Revision as of 10:20, 6 June 2019 by Edward
This page is scheduled for deletion because it is either redundant with information available on the CC wiki, or the software is no longer supported.
HDF
Description: Hierarchical Data Format
SHARCNET Package information: see HDF software page in web portal
Full list of SHARCNET supported software


Introduction

The SHARCNET HDF installation provides both serial and parallel builds of HDF5. See the HDF5 Users Guide for details on how to use HDF5.
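For orientation, a minimal serial HDF5 program looks like the following sketch (the file name and dataset path here are illustrative, not part of the SHARCNET installation; compile it against the loaded hdf5 module as shown in the sections below):

```c
#include "hdf5.h"

int main(void)
{
    hsize_t dims[2] = {4, 6};
    int     data[4][6];

    /* Fill a small integer array. */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 6; j++)
            data[i][j] = i * 6 + j;

    /* Create a new file, a 4x6 dataspace, and an integer dataset. */
    hid_t file  = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(2, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "/dset", H5T_NATIVE_INT, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    /* Write the whole array, then release handles in reverse order. */
    H5Dwrite(dset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);
    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}
```

With the serial hdf5 module loaded, this compiles with `cc h5_write.c -o h5_write -lhdf5` (or via the `h5cc` wrapper, if available).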

Graham and Cedar (also Orca)

Version Selection

module load hdf       # HDF4
module load hdf5      # serial HDF5
module load hdf5-mpi  # parallel (MPI) HDF5

Job Submission

Job scripts to be submitted with the sbatch command:

o Serial Job

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --ntasks=1
#SBATCH --mem=1024M             # memory; default unit is megabytes
#SBATCH --time=0-00:05          # time (DD-HH:MM)
./h5_serial

o MPI Job

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --ntasks=4              # number of MPI processes
#SBATCH --mem-per-cpu=1024M     # memory; default unit is megabytes
#SBATCH --time=0-00:05          # time (DD-HH:MM)
srun ./h5_parallel
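Here h5_parallel stands for your own MPI program. With the hdf5-mpi module loaded, a parallel HDF5 program typically opens the file collectively through an MPI-IO file-access property list; a minimal sketch (the file name is illustrative):

```c
#include "hdf5.h"
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Tell HDF5 to perform its I/O through MPI-IO on this communicator. */
    hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
    H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

    /* All ranks create/open the file collectively. */
    hid_t file = H5Fcreate("parallel.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

    /* ... create dataspaces/datasets and write per-rank hyperslabs here ... */

    H5Fclose(file);
    H5Pclose(fapl);
    MPI_Finalize();
    return 0;
}
```

Compile with `mpicc` and `-lhdf5` as in the example below.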

Example Job

The HDF5 library comes bundled with example code, which can be used to verify that the installation works correctly. For example:

module load hdf5-mpi
cp $EBROOTHDF5/share/hdf5_examples/c/h5_extend.c .
mpicc h5_extend.c -o h5_extend -lhdf5
mpirun -n 2 ./h5_extend
sbatch hdf5_job.sh

The job script hdf5_job.sh contains

#!/bin/bash
#SBATCH --account=def-someuser
#SBATCH --ntasks=2              # number of MPI processes
#SBATCH --mem-per-cpu=1024M     # memory; default unit is megabytes
#SBATCH --time=0-00:05          # time (DD-HH:MM)
srun ./h5_extend

Legacy systems (except for Orca)

Version Selection

module load hdf/serial/5.1.8.11
module load hdf/mpi/5.1.8.11

Job Submission

o Serial Job

sqsub -o out.log -r10m ./h5_serial

o MPI Job

sqsub -o out.log -r10m -qmpi -n 4 ./h5_parallel

Example Job

The HDF5 library comes bundled with example code, which can be used to verify that the installation works correctly. For example:

module load hdf/mpi/5.1.8.11
cp $HDF_HOME/share/hdf5_examples/c/h5_extend.c .
mpicc h5_extend.c -o h5_extend $CPPFLAGS $LDFLAGS -lhdf5
./h5_extend
sqsub -o out.log -r10m -qmpi -n 1 ./h5_extend

General Notes

After loading the hdf module, the standard environment variables $CPPFLAGS and $LDFLAGS contain the paths to the HDF include and library directories, respectively. Simply add

$CPPFLAGS

to your compiler arguments at the compile stage, and add

$LDFLAGS -lhdf5

at the linking stage. You may also need to add "-lhdf5_fortran" for a Fortran code.

More explicitly, you can use

-I $HDF_HOME/include
-L $HDF_HOME/lib -lhdf5

References

o HDF5 Homepage
http://www.hdfgroup.org/HDF5/

o HDF5 Users Guide
http://www.hdfgroup.org/HDF5/doc/UG/UG_frame.html