## Access
Perlmutter is not yet available for general user access. Access will be opened to users in several stages, in the following order:
- The NESAP (NERSC Exascale Science Applications Program) tier 1 and ECP (Exascale Computing Project) teams
- The NESAP tier 2 and Superfacility teams
- Selected general users running GPU applications
- Remaining general users running GPU applications
- Remaining users
## Connecting to Perlmutter
To connect to Perlmutter, first log in to Cori or a Data Transfer Node (DTN), then connect to Perlmutter from there:
ssh perlmutter
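For example, a full login session might look like the following (here `elvis` is a placeholder NERSC username; substitute your own):

```
# Log in to Cori (or a Data Transfer Node such as dtn01.nersc.gov) first
ssh elvis@cori.nersc.gov

# From the Cori (or DTN) prompt, hop to Perlmutter
ssh perlmutter
```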
## Transferring Data to / from Perlmutter Scratch
Perlmutter scratch is only accessible from Perlmutter login and compute nodes. To transfer data to Perlmutter scratch, we recommend first moving it to the Community File System (which is mounted on Perlmutter), either with Globus or with `cp` or `rsync` on a Data Transfer Node. Once the data is on the Community File System, use `cp` or `rsync` from a Perlmutter login node to copy it to Perlmutter scratch. Alternatively, you could use `scp` or `rsync` to copy the data remotely to Perlmutter scratch, but such transfers are easily interrupted and are currently not as fast as staging through the Community File System.
## Preparing for Perlmutter
Please check the Transitioning Applications to Perlmutter webpage for a wealth of useful information on preparing your applications for Perlmutter.
## Compiling/Building Software
The pages below describe how to set up the proper programming environment and compile your code on Perlmutter (a brief example follows the list):
- Compilers at NERSC
- Using Python on Perlmutter
- Environment
- Lmod, a Lua-based module system used on Perlmutter
- Finding and using software on Perlmutter
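As a minimal sketch (the module names `PrgEnv-nvidia` and `cudatoolkit` are assumptions; check `module avail` on Perlmutter for the names that actually exist), setting up the environment with Lmod and compiling with the Cray compiler wrappers might look like:

```
# See what modules are available (Lmod)
module avail

# Load a programming environment and GPU toolkit (names are assumptions)
module load PrgEnv-nvidia
module load cudatoolkit

# Compile with the Cray compiler wrappers (cc for C, CC for C++, ftn for Fortran),
# which pick up the currently loaded programming environment
CC -o my_app my_app.cpp
```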
## Running Jobs
Perlmutter uses Slurm for batch job scheduling. The pages below cover queue policies, how to submit and monitor jobs with Slurm, and more:
- Slurm
- Queue Policies on Perlmutter
- Running Jobs on Perlmutter's GPU nodes
- Monitoring Jobs
- Interactive Jobs
To run a job on Perlmutter GPU nodes, you must submit the job using a project GPU allocation account name, which ends in `_g` (e.g., `m9999_g`). An account name without the trailing `_g` is for charging CPU jobs on Cori and Phase 2 CPU-only nodes.
During Allocation Year 2021, jobs run on Perlmutter will be free of charge.
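As an illustration, a minimal batch script for Perlmutter's GPU nodes might look like the following sketch (the account `m9999_g`, the time limit, and the executable `./my_app` are placeholders; see the Queue Policies and Running Jobs pages for the options that actually apply):

```
#!/bin/bash
#SBATCH --account=m9999_g      # GPU allocation account (note the trailing _g)
#SBATCH --constraint=gpu       # request GPU nodes
#SBATCH --nodes=1
#SBATCH --gpus-per-node=4      # Perlmutter GPU nodes have 4 GPUs each
#SBATCH --time=00:30:00

# Launch one task per GPU (./my_app is a placeholder for your executable)
srun --ntasks-per-node=4 --gpus-per-node=4 ./my_app
```

Submit the script with `sbatch` and check its status with `squeue -u $USER`.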