NERSC Resource Usage Policies

NERSC allocates time on compute nodes and space on its file systems and HPSS system. Accounting and charging for the use of these resources are addressed below under Compute Node Usage Charging and HPSS Charges.

Filesystem usage is governed primarily through usage quotas, addressed below at File System Allocations. Appropriate use of file system resources is described in the NERSC Data Management Policy.

Usage reports are available through Iris for allocated resources.

Policies for the use of shared login node resources are described below in the NERSC Login Node Policy.

Queue usage policies outlined below include Intended Purpose of Available QOSs and Held jobs are deleted after 12 weeks.

NERSC Login Node Policy

Appropriate Usage

Warning

Do not run compute- or memory-intensive applications on login nodes. These nodes are a shared resource. NERSC may terminate processes which are having negative impacts on other users or the systems.

On login nodes, typical user tasks include

  • Compiling codes (limit the number of threads, e.g., make -j 8)
  • Editing files
  • Submitting jobs

Some workflows require interactive use of applications such as IDL, MATLAB, NCL, python, and ROOT. For small datasets and short runtimes it is acceptable to run these on login nodes. For extended runtimes or large datasets these should be run in the batch queues.
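For a workload that outgrows a login node, the same tool can be wrapped in a batch script instead. A minimal sketch is below; the QOS, constraint, walltime, and script name are placeholders to adapt, not NERSC-recommended values.

    #!/bin/bash
    #SBATCH --qos=regular          # standard production QOS
    #SBATCH --nodes=1
    #SBATCH --time=01:00:00        # request only the walltime you need
    #SBATCH --constraint=haswell   # placeholder architecture; use the one you target

    # run the analysis on a compute node rather than a shared login node
    srun python my_analysis.py     # my_analysis.py is a placeholder script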

Usage Limits

NERSC has implemented usage limits on Cori login nodes via Linux cgroup limits. These usage limits prevent inadvertent overuse of resources and ensure a better interactive experience for all NERSC users.

The following memory and CPU limits have been put in place on a per-user basis (i.e., all processes combined from each user) on Cori.

Node type   Memory limit   CPU limit
login       128 GB         50%
workflow    128 GB         50%
jupyter     42 GB          50%

Note

Processes will be throttled to stay within the CPU limit.

Warning

Processes may be terminated with a message like "Out of memory" when exceeding memory limits.

Avoid watch

If you must use the watch command, please use a much longer interval such as 5 minutes (300 seconds), e.g., watch -n 300 <your_command>.

Tips

NERSC provides a wide variety of QOSs:

  • An interactive QOS is available on Cori for compute- and memory-intensive interactive work.
  • If you need to do a large number of data transfers use the dedicated xfer queue.
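As rough sketches of both (node counts, times, constraints, and the transfer script name are placeholders):

    # interactive QOS: request a compute node for interactive work
    salloc --qos=interactive --nodes=1 --time=60 --constraint=haswell

    # xfer QOS: run large data transfers as a batch job instead of on a login node
    sbatch --qos=xfer --time=02:00:00 transfer_script.sh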

Tip

To help identify processes that make heavy use of resources, you can use:

  • top -u $USER
  • /usr/bin/time -v ./my_command

Tip

On Jupyter, there is a widget on the lower left-hand side of the JupyterLab UI that shows aggregate memory usage.

To access the Cori system, see Connecting to Cori Login Nodes.

File System Allocations

Each user has a personal quota in their home directory and on the scratch file system, and each project has a shared quota on the Community File System. NERSC imposes quotas on space utilization as well as inodes (number of files). For more information about these quotas please see the file system quotas page.

HPSS Charges

HPSS charging is based on allocations of space in GBs which are awarded into accounts called HPSS projects. If a login name belongs to only one HPSS project, all its usage is charged to that project. If a login name belongs to multiple HPSS projects, its daily charge is apportioned among the projects using the project percents for that login name. Default project percents are assigned by Iris based on the size of each project's storage allocation.

Users can view their project percents on the "Storage" tab in the user view in Iris. To change your project percents, change the numbers in the "% Charge to Project" column.

For more detailed information about HPSS charging please see HPSS charging.

Compute Node Usage Charging

When a job runs on a NERSC supercomputer, charges accrue against one of the user's projects. The unit of accounting for these charges is the "NERSC Hour". The total number of NERSC hours a job costs is a function of:

  • the number of nodes and the walltime used by the job,
  • the QOS of the job, and
  • the "charge factor" for the system upon which the job was run.

Charge factors are set by NERSC to account for the relative power of the architecture and the scarcity of the resource.

The job-cost formula, along with charge factors for each system and queue, is outlined in Queues and Charges.
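As a purely illustrative sketch of that bookkeeping, with made-up charge and QOS factors (the real values and the exact formula are in Queues and Charges):

    # NERSC hours ~ nodes x walltime hours x machine charge factor x QOS factor
    nodes=100
    hours=3             # actual hours the job occupied the nodes
    charge_factor=140   # hypothetical machine charge factor
    qos_factor=1        # regular QOS; other QOSs scale this up or down

    echo $(( nodes * hours * charge_factor * qos_factor ))   # 42000 NERSC hours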

Charges are based on resources occupied

Job charges are based on the footprint of the job: the space (in terms of nodes) and time (in terms of wallclock hours) that the job occupies on NERSC resources.

Job charges are based on the number of nodes that the job took away from the pool of available resources. A job that was allocated 100 nodes and ran on only one of the nodes will still be charged for the use of 100 nodes.

Likewise, job charges are based on the actual amount of time (to the nearest second) that the job occupied resources, not the requested walltime or the amount of time spent doing computations. So a job that requested 12 hours but ran for only 3 hours and 47 minutes would be charged for 3 hours and 47 minutes, and a job that computed for three minutes and spent the remainder of its 12-hour walltime in an infinite loop would be charged for the full 12 hours.
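One way to check what a finished job actually occupied is Slurm's accounting query (the job ID below is a placeholder):

    # compare the requested time limit with the actual elapsed time
    sacct -j 1234567 --format=JobID,NNodes,Timelimit,Elapsed,State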

Note

Because a reservation takes up space and time that could be otherwise used by other users' jobs, users are charged for the entirety of any reservation they request, including any time spent rebooting nodes and any gaps in which no jobs are running in the reservation.

Note

Reservations are always charged at standard rates and are not eligible for any discounts, no matter the size.

Big Job Discount

A job running on a significant fraction of NERSC's newest production system in the regular queue receives a charging discount. The eligibility conditions and charge factors are tabulated under Queues and Charges.

Running out of Allocation

Accounting information for the previous day is finalized in Iris once daily (in the early morning, Pacific Time). At this time actions are taken if a project or user balance is negative.

If a project runs out of time (or space in HPSS) all login names which are not associated with another active project are restricted:

  • On computational machines, restricted users are able to log in, but cannot submit batch jobs or run parallel jobs, except to the "overrun" partition.
  • For HPSS, restricted users are able to read data from HPSS and delete files, but cannot write any data to HPSS.

Login names that are associated with more than one project (for a given resource -- compute or HPSS) are checked to see if the user has a positive balance in any of their projects (for that resource). If they do have a positive balance (for that resource), they will not be restricted and the following will happen:

  • On computational machines the user will not be able to charge to the restricted project. If the restricted project had been the user's default project, they will need to change their default project through Iris, or specify a different project with sufficient allocation when submitting a job, or run jobs in overrun only.
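For example, a project with remaining allocation can be named explicitly at submission time (the project name below is a placeholder):

    # charge this job to a specific project instead of the default
    sbatch --account=m0000 my_job.sh     # or put '#SBATCH --account=m0000' in the script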

Likewise, when a user goes over their individual user quota in a given project, that user is restricted if they have no other project to charge to. A PI or Project Manager can change the user's quota.

Usage Reports

In Iris, users can view graphs of their own compute and storage usage under the "Jobs" and "Storage" tabs in the user view, respectively. Likewise a user can view the compute and storage usage of their projects under the same tabs in the project view in Iris.

In addition, there is a "Reports" menu at the top of the page from which users can create reports of interest. For more information please see the Iris Users Guide.

Intended Purpose of Available QOSs

There are many different QOSs at NERSC, each with a different purpose. Most jobs should use the "regular" QOS.

Perlmutter

Regular

The standard queue for most production workloads.

Interactive

Code development, testing, debugging, analysis and other workflows in an interactive session. Jobs should be submitted as interactive jobs, not batch jobs.

A pool of 50 nodes is reserved during business hours for interactive use and is released overnight for large-scale jobs.
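A minimal sketch of requesting such a session (node count, time, constraint, and account are placeholders to adapt):

    # request one GPU node interactively for up to an hour
    salloc --qos=interactive --nodes=1 --time=01:00:00 --constraint=gpu --account=m0000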

Cori

Regular

The standard queue for most production workloads.

Debug

Code development, testing, and debugging. Production runs are not permitted in the debug QOS. User accounts are subject to suspension if they are determined to be using the debug QOS for production computing. In particular, job "chaining" in the debug QOS is not allowed. Chaining is defined as using a batch script to submit another batch script.
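To make the definition concrete, the pattern below is a sketch of chaining, i.e. exactly what is not allowed in the debug QOS:

    #!/bin/bash
    #SBATCH --qos=debug
    ./run_short_test      # placeholder work
    sbatch next_step.sh   # submitting another batch script from inside a batch script = chaining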

Interactive

Code development, testing, debugging, analysis and other workflows in an interactive session. Jobs should be submitted as interactive jobs, not batch jobs.

Premium

Jobs needing faster turnaround for unexpected scientific emergencies where results are needed right away. NERSC has a target of keeping premium usage at or below 10 percent of all usage. Premium should be used infrequently and with care.

Warning

The charge factor for premium will increase once a project has used 20 percent of its allocation on premium. PIs will be able to control which of their users can use premium for their allocation. For instructions on adding users to the premium QOS, please see Enabling the premium QOS.

Note

Premium jobs are not eligible for discounts.

Low

Non-urgent jobs that can accept lower priority and incur a lower usage charge.

Flex

Jobs that can produce useful work with a relatively short, flexible run time. The flex QOS has a low charge factor but requires jobs to allow the scheduler to shorten their requested walltime in order to fill gaps in the schedule. Flex usage is described under variable-time jobs.
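A hedged sketch of a flex submission; the essential piece is a minimum acceptable walltime that the scheduler may shrink the request toward (all values and names are placeholders):

    #!/bin/bash
    #SBATCH --qos=flex
    #SBATCH --time=48:00:00       # maximum walltime requested
    #SBATCH --time-min=02:00:00   # shortest walltime in which the job can still do useful work

    srun ./my_restartable_app     # flex jobs should be able to checkpoint and restart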

Overrun

For projects whose NERSC-hours balance is zero or negative. The charging rate for this QOS is 0, and it has the lowest priority on all systems.
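Overrun jobs are requested explicitly, for example (a sketch; any additional requirements for overrun eligibility are described in the queue policy pages):

    # submit to overrun once the project's NERSC-hours balance is exhausted
    sbatch --qos=overrun my_job.sh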

Note

Users who have a zero or negative balance in a project that has a positive balance cannot submit to the Overrun queue. PIs can adjust a project member's share of the project allocation in Iris; instructions are in the Iris guide for PIs.

Realtime

The "realtime" QOS is only available via special request. It is intended for jobs that are connected with an external realtime component that requires on-demand processing.

Compile

The compile QOS is intended for workflows that regularly compile codes from source such as the compiling stage in DevOps models that leverage continuous integration. Jobs are run on one Cori Haswell node and there is no charge for using the compile QOS.
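A sketch of a build job using this QOS (the walltime and build command are placeholders):

    #!/bin/bash
    #SBATCH --qos=compile
    #SBATCH --time=01:00:00

    # continuous-integration style build on the dedicated compile resources
    make -j 8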

Held jobs are deleted after 12 weeks

User-held jobs that were submitted more than 12 weeks ago will be deleted.