
Programming and Debugging

What is MPI?

MPI stands for Message Passing Interface, a standard for writing portable parallel programs that is well accepted in the scientific computing community. MPI is implemented as a library of subroutines layered on top of a network interface. The MPI standard defines both C/C++ and Fortran interfaces, so all of these languages can use MPI. There are several MPI implementations, including Open MPI and MPICH. Vendors of high-performance interconnects also provide their own libraries, usually a version of MPICH layered on an interconnect-specific hardware library.
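As an illustration of the library approach, here is a minimal MPI program in C (a sketch only; the program and file names are arbitrary):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                       /* shut the MPI runtime down */
    return 0;
}

With most MPI implementations this is compiled with the mpicc wrapper (e.g. mpicc mpiHello.c -o mpiHello) and launched with mpirun or, under the scheduler, srun.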

For an introduction to MPI programming, refer to the MPI tutorial.

In addition to the C/C++ and Fortran versions of MPI, other language bindings exist as well. If you have any special needs, please contact us.

What is OpenMP?

OpenMP is a standard for programming shared memory systems using threads, with compiler directives inserted in the source code. It provides a higher-level approach to utilizing multiple processors within a single machine while keeping the structure of the source code as close to the conventional form as possible. OpenMP is much easier to use than the alternative (Pthreads) and is thus suitable for adding modest amounts of parallelism to pre-existing code. Because OpenMP consists of compiler directives that a non-OpenMP compiler simply ignores, your code can still be compiled by a serial compiler and should still behave the same.

OpenMP for C/C++ and Fortran is supported by many compilers, including the PathScale and PGI compilers for Opterons, and the Intel compilers for IA32 and IA64 (such as SGI's Altix). OpenMP support has been provided in the GNU compiler suite since v4.2 (OpenMP 2.5), and starting with v4.4 it supports the OpenMP 3.0 standard.
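To show the directive style, here is a minimal OpenMP program in C (a sketch of the kind of program, such as the ompHello example used below; the file name is arbitrary):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* the block below runs once per thread; the thread count is taken
       from the OMP_NUM_THREADS environment variable at run time */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}

Compiled without OpenMP support, the pragma is ignored and the program runs serially; with GCC, OpenMP is enabled with the -fopenmp flag (e.g. gcc -fopenmp ompHello.c -o ompHello).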

How do I run an OpenMP program with multiple threads?

An OpenMP program uses a single process with multiple threads rather than multiple processes. On multicore (i.e. practically all) systems, threads are scheduled on the available processors and thus run concurrently. For each thread to run on its own processor, one needs to request as many CPUs as the number of threads to be used. To run an OpenMP program such as ompHello with four threads, use the following job submission script.

The option --cpus-per-task=4 reserves 4 CPUs for the task, one per thread.

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --time=0-0:5
#SBATCH --cpus-per-task=4
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
./ompHello

For a basic introduction, refer to the OpenMP tutorial.

How do I measure the CPU time when running a multi-threaded job?

If you submit a job through the scheduler, timing information is collected by the scheduler itself and stored for later query (e.g. with the Slurm sacct command).

If you are running an OpenMP program interactively, you can use the time utility to collect information.

In a typical example using 8 threads:

export OMP_NUM_THREADS=8
time ./ompHello

Your output will be something like:

real	0m1.633s
user	0m1.132s
sys	0m0.917s

In this example the real and user times are comparable, so this particular program is not benefiting from multithreading. Since user time is summed across all threads, real time should be less than user time when parallel execution is actually occurring.

What mathematics libraries are available?

Every system has the basic linear algebra libraries BLAS and LAPACK installed. Normally these interfaces are provided by vendor-tuned libraries. On Intel-based (Xeon) clusters it is probably best to use the Intel Math Kernel Library (MKL); on Opteron-based clusters, AMD's ACML library is available. Either library will, however, work reasonably well on both types of systems. If one expects to do a large amount of computation, it is generally advisable to benchmark both libraries and select the one offering the best performance for the given problem and system.

One may also find the GNU Scientific Library (GSL) useful for particular needs. It is an optional package, available on any machine.

For a detailed list of the libraries on each cluster, please check the documentation on the corresponding SHARCNET satellite web sites.

How do I use mathematics libraries such as BLAS and LAPACK routines?

First you need to know which subroutine you want to use; check the references to find which routines meet your needs. Then place calls to those routines in your program and compile it against the particular libraries that provide them. For instance, if you want to compute the eigenvalues, and optionally the eigenvectors, of an N by N real nonsymmetric matrix in double precision, you will find that the LAPACK routine DGEEV does this. All you need to do is call DGEEV with the parameters specified in the LAPACK documentation and compile your program to link against the LAPACK library.

Now to compile the program, you need to link it to a library that contains the LAPACK routines you call in your code. The general recommendation is to use Intel's MKL library, which has a module loaded by default on most Compute Canada/SHARCNET systems. The instructions on how to link your code with these libraries at compile time are provided on the MKL page.
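As a sketch, here is how the DGEEV call might look from C via the LAPACKE interface (header names differ between implementations; MKL ships mkl_lapacke.h). The 2 by 2 matrix is purely illustrative:

#include <stdio.h>
#include <lapacke.h>   /* with MKL, use mkl_lapacke.h instead */

int main(void)
{
    /* 2x2 example matrix in row-major order; its eigenvalues are -1 and -2 */
    double a[4] = {  0.0,  1.0,
                    -2.0, -3.0 };
    double wr[2], wi[2];   /* real and imaginary parts of the eigenvalues */
    lapack_int n = 2;

    /* 'N','N': skip computing the left and right eigenvectors */
    lapack_int info = LAPACKE_dgeev(LAPACK_ROW_MAJOR, 'N', 'N', n,
                                    a, n, wr, wi, NULL, n, NULL, n);
    if (info != 0) {
        fprintf(stderr, "DGEEV failed, info = %d\n", (int)info);
        return 1;
    }
    for (int i = 0; i < n; i++)
        printf("eigenvalue %d: %g + %gi\n", i, wr[i], wi[i]);
    return 0;
}

The linking flags depend on the library; for MKL, see the link-line instructions on the MKL page mentioned above.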


My code is written in C/C++, can I still use those libraries?

Yes. Most of the libraries have C interfaces. If you are not sure about the C interface, or need assistance using these Fortran libraries, we can help you on a case-by-case basis.

What packages are available?

Various packages have been installed on Compute Canada/SHARCNET clusters at users' requests. The full, up-to-date list is available on the Compute Canada documentation wiki (link). If you do not see a package that you need on this list, please request it by submitting a problem ticket.

You can also search this wiki or the Compute Canada wiki for the package you are interested in to see if there is any additional information about it available.

We can also help you with compiling/installing a package into your own file space if you prefer that approach.

What interconnects are used on SHARCNET clusters?

Currently, several different interconnects are being used on SHARCNET clusters: Quadrics, Myrinet, InfiniBand and standard IP-based ethernet.

Debugging serial and parallel programs

A debugger is a program that helps to identify mistakes ("bugs") in programs, either at run time or "post-mortem" (by analyzing the core file produced by a crashed program). Debuggers can be either command-line or GUI (graphical user interface) based. Before a program can be debugged, it needs to be (re-)compiled with the -g switch, which tells the compiler to include symbolic information in the executable. For MPI problems on the HP XC clusters, -ldmpi links in the HP MPI diagnostic library, which is very helpful for discovering incorrect use of the API.

SHARCNET highly recommends using our commercial debugger DDT. It has a very friendly GUI, and can also be used for debugging serial, threaded, MPI, and CUDA (GPGPU) programs. A short description of DDT and cluster availability information can be found on its software page. Please also refer to our detailed Parallel Debugging with DDT tutorial.

SHARCNET also provides gdb (installed on all clusters, type "man gdb" to get a list of options and see our Common Bugs and Debugging with gdb tutorial).


What is NaN ?

NaN stands for "Not a Number". It is an undefined or unrepresentable value, typically encountered in floating point arithmetic (e.g. the square root of a negative number). To debug this in your program one typically has to unmask, or trap, floating point exceptions. This is fairly straightforward with Fortran compilers (e.g. with Intel's ifort one simply needs to add the "-fpe0" switch), but somewhat more complicated with C/C++ codes, where the best solution is to use the feenableexcept() function. There are further details in the Common Bugs and Debugging with gdb tutorial.
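As a minimal sketch of the C approach, the following program unmasks floating point exceptions with feenableexcept() (a GNU extension, so glibc is assumed), causing an abort at the faulting operation instead of a silent NaN:

#define _GNU_SOURCE
#include <fenv.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* raise SIGFPE on invalid operations, division by zero and overflow */
    feenableexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW);

    double x = -1.0;
    double y = sqrt(x);  /* FE_INVALID: program aborts here instead of returning NaN */
    printf("%f\n", y);   /* never reached */
    return 0;
}

Compiled with -g (e.g. gcc -g nan.c -o nan -lm) and run under gdb, the debugger stops exactly at the line that produced the NaN.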

My program exited with an error code XXX - what does it mean?

Your application crashed, producing an error code XXX (where XXX is a number). What does it mean? The answer may depend on your application. Normally, user codes do not use the first 130 or so error codes, which are reserved for operating system level error codes. On most of our clusters, typing

 perror  XXX

will print a short description of the error. (This is a MySQL utility, and for XXX>122 it will start printing only MySQL-related error messages.) An accurate list of system error codes for the current operating system (OS) can be found on our clusters in the file /usr/include/asm-x86_64/errno.h (/usr/include/asm-generic/errno.h on some systems).
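If perror is unavailable, a small C program can produce the same descriptions from the C library; this sketch (the program name is arbitrary) prints the system's text for a given code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* usage: ./errdesc 2   ->   error 2: No such file or directory */
int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <error-code>\n", argv[0]);
        return 1;
    }
    int code = atoi(argv[1]);
    printf("error %d: %s\n", code, strerror(code));
    return 0;
}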