Distributed Memory Programming With MPI


MPI (the Message Passing Interface) is a widely used standard API for programming parallel computers, ranging from multicore laptops to large-scale SMP servers and clusters. Its versatility and broad applicability have made it the de facto standard for high-performance programming.

This workshop is directed at current or prospective users of parallel computers who want to improve the performance of their programs significantly by “parallelizing” their code for a wide range of platforms. No prior background in parallel computing is required, but some programming experience in either Fortran or C is useful.

The content of the course ranges from introductory to intermediate. On the first day, we give a brief introduction to parallel programming and introduce MPI using simple examples. We outline the usage of about a dozen routines to familiarize users with the basic concepts of MPI programming. We then discuss some simple parallel models that can be programmed with this limited set of routines, as well as how data are distributed across the separate memories of the participating processes.

On the second day, we move on to more advanced aspects of MPI programming, such as the definition and use of user-defined (derived) datatypes and parallel input/output. In the afternoon, we discuss at some length how combining MPI with simple multi-threading through compiler directives (OpenMP) can dramatically improve the utilization of modern clusters.

Throughout the workshop, we will work through simple hands-on exercises on a dedicated cluster to put the newly gained knowledge into practice.

Instructor: Hartmut Schmider, HPCVL, Queen's University.

Prerequisites: Basic Fortran or C programming.