PETSC_SLEPC
Description: A suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations
SHARCNET Package information: see PETSC_SLEPC software page in web portal
Full list of SHARCNET supported software


PETSC on new Compute Canada systems (graham and cedar; also orca)

PETSC is installed on Compute Canada systems as a module. The standard build is loaded with:

module load petsc/3.7.5

and the version compiled with --with-64-bit-indices=1 is loaded with:

module load petsc-64bits/3.7.5
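
These systems use the Lmod module system, so you can also list all installed PETSC-related modules and versions with:

module spider petsc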

For information about submitting jobs, please see the Running jobs page at docs.computecanada.ca.
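
As a minimal sketch (the account name, core count, memory, and run time below are placeholders you would adjust for your own job), a Slurm submission script for an MPI code linked against PETSC might look like:

#!/bin/bash
#SBATCH --account=def-someuser   # replace with your own allocation account
#SBATCH --ntasks=4               # number of MPI processes
#SBATCH --mem-per-cpu=1024M      # memory per MPI process
#SBATCH --time=0-01:00           # run time (D-HH:MM)
module load petsc/3.7.5
srun ./code [code args]

The script would then be submitted with "sbatch script.sh".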

PETSC SLEPC on older SHARCNET systems

Introduction

This package combines two libraries:

  • PETSC: a suite of data structures and routines for the scalable (parallel) solution of scientific applications modeled by partial differential equations.
  • SLEPC: a software library for the solution of large scale sparse eigenvalue problems on parallel computers. (An extension of PETSC.)

Both libraries are MPI-enabled (parallelized), so to take full advantage of them one normally links against an MPI version ("flavor") of the library and submits the code to the "mpi" queue with more than one CPU core (see below).

Version Selection

PETSC is notorious for introducing incompatibilities even between minor version changes, so it is often a good idea to work with a specific version of the library. SHARCNET provides the standard mechanism for version selection, via module commands. The root module name for this package is petsc_slepc. For example, to see all versions of the package installed on the current cluster, execute

module avail petsc

This list will show multiple versions and also multiple "flavors" for each version available as separate modules. The currently installed flavors are

  • serial: non-parallel version for real numbers with 64-bit array indices;
  • mpi32: MPI version for real numbers with 32-bit array indices;
  • mpi: MPI version for real numbers with 64-bit array indices;
  • complex: MPI version for complex numbers with 64-bit array indices.

To load a specific module, use the "module load" command, e.g.:

module load petsc_slepc/mpi32/3.4.3

This will modify certain environment variables to make it easy to use the library. Most critically, it will modify (or declare) the variables $PETSC_DIR and $SLEPC_DIR to point to the root directories of the PETSC and SLEPC libraries, respectively, of the requested version and flavor. It is very convenient to use these variables inside your makefile. The module also modifies the variables $CPPFLAGS, $LDFLAGS, and $LD_RUN_PATH.
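
For example, a minimal makefile for a SLEPC program main.c can rely on the makefile machinery that PETSC/SLEPC themselves ship, pulled in through $SLEPC_DIR. This is only a sketch: the include path below is the one used by the 3.4.x series (newer versions use lib/slepc/conf/slepc_common), and the indented command lines must start with a tab:

include ${SLEPC_DIR}/conf/slepc_common

main: main.o chkopts
	-${CLINKER} -o main main.o ${SLEPC_LIB}
	${RM} main.o

Here CLINKER, SLEPC_LIB, the chkopts target, and the rule for building main.o all come from the included file, so with the module loaded, "make main" is enough to build the executable.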

To switch to a different module (version and/or flavor), the fail-safe way is to first unload the currently loaded petsc_slepc module (if any), e.g.

module unload petsc_slepc/mpi32/3.4.3

and then load the module you want, e.g.

module load petsc_slepc/complex/3.4.3

Job Submission

If you are using the serial flavor of PETSC/SLEPC in a serial (non-MPI) code, submit your job as in the following example:

sqsub -r 60m -o ofile.%J -i myinputfile ./code [code args]

In all other cases (a non-serial flavor of PETSC/SLEPC and/or an MPI main program) you have to submit the job to the "mpi" queue and specify the requested number of CPU cores, e.g.

sqsub -r 60m -q mpi -n 4 -o ofile.%J -i myinputfile ./code [code args]

Example Job
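
As a sketch of a complete workflow on an older SHARCNET cluster, one can build and run one of the tutorial programs shipped in the PETSC source distribution. The tutorial path below is the one used by the 3.4.x source tree, and the compile line assumes the module's $CPPFLAGS and $LDFLAGS supply the needed include and library paths:

module load petsc_slepc/mpi32/3.4.3
# copy ex2.c (one of the KSP tutorials, found under
# src/ksp/ksp/examples/tutorials/ in the 3.4.x source tree) into the
# working directory, then compile and submit it
mpicc -o ex2 ex2.c $CPPFLAGS $LDFLAGS -lpetsc
sqsub -r 60m -q mpi -n 4 -o ofile.%J ./ex2 -m 100 -n 100

The ex2 tutorial solves a linear system on a 100x100 grid, distributed over the 4 requested CPU cores.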

General Notes

Compiling it yourself

PETSC has so many different compile-time options and optional third-party components that it is not feasible to provide pre-compiled versions for all occasions. In addition, most of our packages are compiled using older versions of compilers and MPI libraries; if you need PETSC built with a newer compiler, you have to compile it yourself. (Or ask us to help you.)

Here are typical instructions for compiling a recent PETSC with a newer (v15) Intel compiler and a matching newer OpenMPI library. This is the MPI flavor, with real numbers and 32-bit indices. All of these commands have to be executed on a login node of the cluster (they need internet access, so development nodes will not work). First download the source files for the required PETSC version from the PETSc website (see References below), unpack them, and then "cd" to the unpacked source directory. If you need to compile additional components, add the corresponding flags to the configure command below (e.g., to compile with Hypre add "--download-hypre", to compile with Metis and ParMetis add "--download-metis --download-parmetis", and so on; run "./configure --help" to see all flags).

$ module unload intel openmpi mkl
$ module load intel/15.0.3
$ module load openmpi/intel1503-std/1.8.7
$ export VER=3.7.6
$ export flavor=mpi32
$ export PETSC_DIR=$PWD
$ ./config/configure.py --prefix=/work/$USER/PETSC/$VER/$flavor --ignoreWarnings=1 --with-shared-libraries --with-scalar-type=real \
--with-blas-lapack-dir=/opt/sharcnet/intel/15.0.3/lib --with-mpi-shared-libraries=1 --with-x=0 --with-x11=0 \
--with-mpi-dir=/opt/sharcnet/openmpi/1.8.7/intel-15.0.3/std  --with-debugging=no  --with-64-bit-indices=0 --with-cuda=0

If the last command runs successfully, it will print the next command to execute (make PETSC_DIR=...). If that command succeeds as well, it will suggest running the "make ... install" command. Finally, you can execute the suggested "make ... test" command to test the integrity of the installed library.
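
The exact commands are printed by configure and by each subsequent make step, but they typically look like the following (arch-linux2-c-opt is only an example of the build-directory name that configure may choose):

$ make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-c-opt all
$ make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-c-opt install
$ make PETSC_DIR=/work/$USER/PETSC/$VER/$flavor PETSC_ARCH="" test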

At this point your newly compiled PETSC library can be found in the /work/$USER/PETSC/$VER/$flavor directory. To use it, execute

$ export PETSC_DIR=/work/$USER/PETSC/$VER/$flavor

References

  • PETSC Homepage: http://www.mcs.anl.gov/petsc/
  • SLEPC Homepage: http://www.grycap.upv.es/slepc/