Revision as of 09:26, 30 August 2016 by Edward (Talk | contribs)


Current high-performance computing (HPC) systems feature a hierarchical hardware design: distributed memory across nodes and shared memory among the cores within each node. Parallel programs can therefore combine distributed-memory parallelization across nodes (MPI) with shared-memory parallelization inside each node (OpenMP) to improve overall performance, reduce communication and memory consumption, or improve load balance for some applications.
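As a minimal illustration of this combination (a sketch, not code from the seminar itself), the C program below starts MPI with a requested thread-support level and then opens an OpenMP parallel region inside each MPI process, so each process-plus-threads unit maps onto one node of the hierarchy:

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nprocs;

    /* Request MPI_THREAD_FUNNELED: threads exist, but only the
       master thread makes MPI calls. MPI reports the level it
       actually provides in 'provided'. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Shared-memory parallelism within the process: one OpenMP
       team per MPI rank. */
    #pragma omp parallel
    {
        int tid = omp_get_thread_num();
        int nthreads = omp_get_num_threads();
        printf("Hello from thread %d/%d on MPI rank %d/%d\n",
               tid, nthreads, rank, nprocs);
    }

    MPI_Finalize();
    return 0;
}
```

With, say, 2 MPI processes of 4 threads each, this prints 8 "Hello" lines in total. Requesting only the thread-support level you need (FUNNELED here, rather than MPI_THREAD_MULTIPLE) lets the MPI library avoid unnecessary locking.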

In this seminar, we will describe the difference between the message-passing and shared-memory models, the basic principles of the hybrid parallel programming approach, and how to write basic hybrid codes; finally, we will discuss how to compile and execute hybrid MPI+OpenMP code on SHARCNET clusters.
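In general terms (exact compiler wrappers, module names, and job-submission commands vary by cluster and are covered in the seminar), compiling and running a hybrid code looks something like this, assuming a GCC-based MPI wrapper and a hypothetical source file `hybrid_hello.c`:

```shell
# Compile with the MPI wrapper plus the compiler's OpenMP flag
# (-fopenmp for GCC; other compilers use a different flag).
mpicc -fopenmp -O2 hybrid_hello.c -o hybrid_hello

# Launch 2 MPI processes, each running 4 OpenMP threads.
export OMP_NUM_THREADS=4
mpirun -np 2 ./hybrid_hello
```

On a production cluster these commands would normally go inside a batch script submitted to the scheduler, with the process and thread counts chosen to match the cores per node.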

This seminar is intended for those who have a basic understanding of MPI and OpenMP, as well as for those who use third-party hybrid software on SHARCNET clusters.