
Revision as of 10:27, 6 June 2019

This page will be deleted pending its creation on the CC wiki.
Description: MPI profiler
SHARCNET Package information: see MAP software page in web portal
Full list of SHARCNET supported software

Note: This software requires a graphical connection to SHARCNET.
Please consult our Remote Graphical Connections page for instructions.
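One common way to get a graphical connection is X11 forwarding over SSH; a minimal sketch, assuming an X server runs locally (the username is a placeholder, and the linked page describes the supported methods in full):

```shell
# -Y enables trusted X11 forwarding so MAP's GUI can display locally
ssh -Y username@graham.computecanada.ca
```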

Graham instructions

DDT and MAP can now be used on our newest cluster, Graham.

MAP doesn't require any instrumentation (no changes to the code, no need to link against additional libraries). The only special requirement is to add the "-g" compiler switch, the same way it is done for debugging. Unlike debugging, where normally all optimizations are turned off (-O0), for MAP one can use any required optimization flags (e.g. "-O2"). So for a C code code.c the compile command can be

mpicc -O2 -g code.c -o code
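The same applies to the other languages; a sketch assuming the standard Open MPI compiler wrapper names (mpicxx, mpif90):

```shell
# C++ source: keep optimizations, add debug symbols for MAP
mpicxx -O2 -g code.cpp -o code

# Fortran source
mpif90 -O2 -g code.f90 -o code
```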

For small to medium size jobs MAP can be used interactively. First one has to allocate the node(s) for the profiling job with salloc (which accepts many of the sbatch arguments), e.g.:

salloc --time=0-1:00 --mem-per-cpu=4G --ntasks=4

Once the resource is allocated, you will get a shell on the allocated node. There you have to execute these commands:

module load openmpi/2.0.2
module load allinea-cpu

(The first command is required because of a bug in the newer OpenMPI module which interferes with MAP.)

Then you can run map interactively:

map ./code

When done, exit the shell (this will terminate the allocation).

For large MPI jobs one can submit a MAP job to the scheduler. Include the above two module load commands in your job script, and add the following line at the end:

map -n 24 -profile ./code

The above example will run MAP with the code "./code" on 24 CPU cores, with MAP messages going to the map.log file. It will create a profiling file code*.map, which can later be analyzed interactively by opening it with MAP:

map code*.map

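Putting the batch steps together, a complete job script could look like the following minimal sketch (the time, task count, and memory values are illustrative and should be adapted to your job):

```shell
#!/bin/bash
#SBATCH --time=0-1:00        # run time (illustrative)
#SBATCH --ntasks=24          # number of MPI tasks
#SBATCH --mem-per-cpu=4G     # memory per core (illustrative)

# Work around the bug in the newer OpenMPI module that interferes with MAP
module load openmpi/2.0.2
module load allinea-cpu

# Profile the run; MAP writes a code*.map file in the working directory
map -n 24 -profile ./code
```

Submit it with sbatch as usual; once the job finishes, open the resulting code*.map file interactively with MAP.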
MAP Homepage