This effort aims to identify the computational characteristics of the DOE miniapps developed at the various exascale codesign centers.

Single Node Performance Evaluation for Exascale Codesign Center Miniapps

We started these efforts by evaluating performance on a single node. We collected information about each application's scaling behavior and about how it stresses architectural resources such as the memory hierarchy and the execution units. For details, visit the Overview section.
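As an illustration of the kind of single-node measurement involved, basic counts for the memory hierarchy and execution units can be gathered with a generic profiling command such as Linux perf (this is only a sketch; the miniapp binary name is a placeholder, and the actual tools and counters we used are described in the Overview section):

perf stat -e cycles,instructions,cache-references,cache-misses ./miniapp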

MPI Traces for Exascale Codesign Center Miniapps

We collected MPI traces for the miniapps that use the MPI communication interface.

The traces were collected using the open-source dumpi toolkit. The traces distributed from this site are in binary format to save space; they are archived with tar and compressed with gzip. Dumpi generates one trace per MPI rank, so each archive contains many trace files.

After downloading a trace archive, for instance cesar_Mocfe_256.tar.gz, one can extract the directory of traces using:

tar xzvf cesar_Mocfe_256.tar.gz

The directory will contain 256 trace files in binary format. To convert each binary file to a readable format, one can use the dumpi2ascii tool distributed with dumpi.
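For example, assuming the extracted files follow dumpi's usual dumpi-<timestamp>-<rank>.bin naming (the file names below are illustrative), a single trace can be converted with:

dumpi2ascii dumpi-0000.bin > dumpi-0000.txt

and all traces in the directory can be converted in one pass with:

for f in *.bin; do dumpi2ascii "$f" > "${f%.bin}.txt"; done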

We ran the trace collection experiments on the Cray XE6 (Hopper) at NERSC, using 64, 256, and 1024 cores per run. Some applications use MPI only, while others use hybrid programming that mixes MPI with OpenMP. The traces carry message timing, routing, and size information. The timing information is specific to the Hopper machine, while the message sizes and destinations are determined by the problem sizes and the levels of parallelization.
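For reference, collecting such traces with dumpi typically requires only linking the application against the libdumpi library and running it as usual; a minimal sketch, assuming a Cray compiler wrapper, a placeholder install path, and a placeholder binary name (jobs on Hopper are launched with aprun):

cc -o miniapp miniapp.c -L$DUMPI_ROOT/lib -ldumpi
aprun -n 256 ./miniapp

Each MPI rank then writes its own binary trace file to the working directory.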

One can explore the available traces through the following URLs: