
Compiler Wrappers

NERSC provides compiler wrappers on Perlmutter and Cori which combine the native compilers (Intel, GNU, HPE Cray, NVIDIA, and AOCC) with MPI and various other libraries to enable streamlined compilation of scientific applications.

HPE Cray Compiler Wrappers

HPE Cray provides a convenient set of wrapper commands that should be used in almost all cases for compiling and linking parallel programs. Invoking the wrappers will automatically link codes with MPI libraries and other HPE Cray system software. All MPI and Cray system directories are also transparently imported. In addition, the wrappers cross-compile for the appropriate compute node architecture, based on which craype-<arch> module is loaded when the compiler is invoked, where the possible values of <arch> are discussed below.

Compiler wrappers target compute nodes, not login nodes

The intention is that programs are compiled on the login nodes and executed on the compute nodes. Because the compute nodes and login nodes have different hardware and software, executables cross-compiled for compute nodes may fail if run on login nodes. The wrappers guarantee that codes compiled with them are prepared for running on the compute nodes.

KNL-specific compiler flags should be used for codes running on KNL nodes

On Cori there are two types of compute nodes: Haswell and KNL. While applications cross-compiled for Haswell do run on KNL compute nodes, the converse is not true (applications compiled for KNL will fail if run on Haswell compute nodes). Additionally, even though a code compiled for Haswell will run on a KNL node, it will not be able to take advantage of the wide vector processing units available on KNL. Consequently, one should specifically target KNL nodes during compilation in order to achieve the highest possible code performance. Please see below for more information on how to compile for KNL compute nodes.
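As a sketch of that workflow, one swaps the craype target module before invoking the wrappers (craype-haswell is loaded by default on Cori):

module swap craype-haswell craype-mic-knl
ftn -o example.x example.f90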

Basic Example

The HPE Cray compiler wrappers replace other compiler wrappers commonly found on computer clusters, such as mpif90, mpicc, and mpic++. By default, the HPE Cray wrappers include MPI libraries and header files, as well as the many scientific libraries included in HPE Cray LibSci.

For detailed information on using a particular compiler suite, please check the documentation page for that suite.

Fortran

ftn -o example.x example.f90

C

cc -o example.x example.c

C++

CC -o example.x example.cpp
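Because the wrappers add MPI headers and libraries automatically, an MPI program needs no extra include or link flags. A minimal sketch (file names hypothetical):

#include <mpi.h>
#include <stdio.h>

/* Minimal MPI program: each rank prints its rank number. */
int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hello from rank %d\n", rank);
    MPI_Finalize();
    return 0;
}

This compiles with no additional flags:

cc -o hello.x hello.c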

By default (from cdt/19.06 onwards on Cori, and for all CPE versions on Perlmutter), the Cray compiler wrappers build dynamically linked executables. To build statically linked executables, add the -static flag to the compile and link lines, or set CRAYPE_LINK_TYPE=static in the environment:

cc -static -o example.x example.c

or

export CRAYPE_LINK_TYPE=static
cc -o example.x example.c

Static linking can fail on Perlmutter

HPE Cray provides both static and dynamic PE (MPI, LibSci, etc.) libraries for Perlmutter. When building executables, users do not have to add link flags for PE libraries, since the compiler wrappers do the necessary work underneath. However, attempts to build statically linked executables can fail because the compiler wrappers may not properly link the necessary static PE libraries.
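Since the default is dynamic linking, ldd is a quick way to check which shared PE libraries an executable actually resolved (a sketch, reusing the binary name from the earlier examples):

ldd ./example.x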

Usage Tips

Use compiler wrappers in ./configure

When compiling an application that uses the standard series of ./configure, make, and make install, specifying the compiler wrappers in the appropriate environment variables is often sufficient for the configure step to succeed:

./configure CC=cc CXX=CC FC=ftn
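The full standard sequence then looks like this (install prefix and package-specific options omitted):

./configure CC=cc CXX=CC FC=ftn
make
make install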

Set the accelerator target to GPUs for CUDA-aware MPI on Perlmutter

When building an application that uses CUDA-aware MPI, you must set the accelerator target to nvidia80, either via the compile flag -target-accel=nvidia80 or via the environment variable CRAY_ACCEL_TARGET. This is because the GTL (GPU Transport Layer) library needs to be linked for MPI communication involving GPUs, and setting the accelerator target lets the wrappers link that library. If you don't set it, you may get the following runtime error:

MPIDI_CRAY_init: GPU_SUPPORT_ENABLED is requested, but GTL library is not linked
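To avoid this, set the target at compile time. A sketch using the wrapper invocations from the earlier examples:

export CRAY_ACCEL_TARGET=nvidia80
cc -o example.x example.c

or

cc -target-accel=nvidia80 -o example.x example.c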

For more info, see the section on setting the accelerator target.

Use cdt or cpe modules to control versions of Cray PE modules

To use a non-default version of the CDT (Cray Developer Toolkit) on Cori or of the CPE (Cray Programming Environment) on Perlmutter, which selects craype, cray-libsci, cray-mpich, etc. from that specific version, one can issue the following commands first. Below is an example on Cori:

module load cdt/<the-non-default-version>
export LD_LIBRARY_PATH=$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH

Then, compile and run as usual.
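On Perlmutter, the analogous sketch uses a cpe module instead (assuming the same LD_LIBRARY_PATH adjustment applies):

module load cpe/<the-non-default-version>
export LD_LIBRARY_PATH=$CRAY_LD_LIBRARY_PATH:$LD_LIBRARY_PATH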

Intel Compiler Wrappers on Cori

Although the Cray compiler wrappers cc, CC, and ftn are the default (and recommended) compiler wrappers on the Cori system, wrappers for Intel MPI are provided as well via the impi module.

The Intel MPI wrapper commands are mpiicc, mpiicpc, and mpiifort, which are analogous to cc, CC, and ftn from the Cray wrappers, respectively. As with the Cray wrappers, the default link type for the Intel wrappers is dynamic, not static.

Intel MPI may be slower than Cray MPI on Cray systems

Although Intel MPI is available on the Cray systems at NERSC, it is not tuned for high performance on the high speed network on these systems. Consequently, it is possible, even likely, that MPI application performance will be lower if compiled with Intel MPI than with Cray MPI.

Intel MPI wrappers work only with Intel compilers

If one chooses to use the Intel MPI compiler wrappers, they are compatible only with the Intel compilers icc, icpc, and ifort. They are incompatible with the Cray and GCC compilers.

Intel MPI wrappers must specify architecture flags explicitly

While the Cray compiler wrappers cross-compile source code for the appropriate architecture based on the craype-<arch> modules (e.g., craype-haswell for Haswell code and craype-mic-knl for KNL code), the Intel wrappers do not. The user must apply the appropriate architecture flags to the wrappers manually, e.g., adding the -xMIC-AVX512 flag to compile for KNL.
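For example (a sketch; -xCORE-AVX2 is the Intel flag targeting Haswell's AVX2 units):

mpiicc -xCORE-AVX2 -o example.x example.c     # for Haswell nodes
mpiicc -xMIC-AVX512 -o example.x example.c    # for KNL nodes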

Intel MPI wrappers do not link to Cray libraries by default

Unlike the Cray compiler wrappers, the Intel compiler wrappers do not automatically include and link to scientific libraries such as LibSci. These libraries must be included and linked manually if using the Intel MPI wrappers.
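As a sketch, a code that needs BLAS/LAPACK might link Intel MKL explicitly using the classic Intel compilers' -mkl shorthand (the choice of MKL here is an assumption, not a requirement):

mpiifort -mkl -o example.x example.f90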

Compiling

The Intel compiler wrappers function similarly to the Cray wrappers cc, CC, and ftn. However, a few extra steps are required. To compile with the Intel MPI wrappers, one must first load the impi module:

module load impi
mpiifort -xMIC-AVX512 -o example.x example.f90

Running

To run an application compiled with Intel MPI, one must load the impi module and then issue the same srun commands as one typically would for an application compiled with the Cray wrappers.
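A minimal sketch (the task count is hypothetical):

module load impi
srun -n 64 ./example.x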