This page is scheduled for deletion because it is either redundant with information available on the CC wiki, or the software is no longer supported.
PNETCDF
Description: A High Performance API for NetCDF File Access
SHARCNET package information: see the PNETCDF software page in the web portal
Full list of SHARCNET supported software
Introduction
The Parallel-NetCDF library provides high-performance parallel I/O to and from files in the popular NetCDF format.
Parallel-NetCDF on new Compute Canada clusters (cedar and graham)
The Parallel-NetCDF library version 1.8.1 is installed as a module on cedar and graham. To load the module, execute:
module load pnetcdf/1.8.1
The library will then be linked in when -lpnetcdf is included as a compiler flag.
Compiling the C code
mpicc pnetcdf-write-nfiles.c -o test $CPPFLAGS $LDFLAGS -lpnetcdf
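The contents of pnetcdf-write-nfiles.c are not reproduced on this page; the following is a minimal sketch of a comparable PnetCDF C program that the compile line above could build, in which each MPI rank writes one element of a shared 1D variable. The file name, variable names, and minimal error handling are illustrative assumptions, not the original SHARCNET example.

/* Minimal sketch: each MPI rank writes one value of a 1D variable
 * to a shared NetCDF file using the PnetCDF C API. */
#include <stdio.h>
#include <mpi.h>
#include <pnetcdf.h>

int main(int argc, char **argv)
{
    int rank, nprocs, ncid, dimid, varid, err;
    MPI_Offset start[1], count[1];
    float val;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    val = 2.0f * rank;   /* example data: one value per rank */

    /* create the file collectively */
    err = ncmpi_create(MPI_COMM_WORLD, "output1.nc", NC_CLOBBER,
                       MPI_INFO_NULL, &ncid);
    if (err != NC_NOERR) {
        fprintf(stderr, "ncmpi_create: %s\n", ncmpi_strerror(err));
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* define one dimension and one variable, then leave define mode */
    err = ncmpi_def_dim(ncid, "x", (MPI_Offset)nprocs, &dimid);
    err = ncmpi_def_var(ncid, "xdata", NC_FLOAT, 1, &dimid, &varid);
    err = ncmpi_enddef(ncid);

    /* each rank writes its own element (collective call) */
    start[0] = rank;
    count[0] = 1;
    err = ncmpi_put_vara_float_all(ncid, varid, start, count, &val);

    err = ncmpi_close(ncid);
    MPI_Finalize();
    return 0;
}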
Compiling the f90 code
mpif90 test.f90 -o test $CPPFLAGS $LDFLAGS -lpnetcdf
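To run the resulting executable on cedar or graham, the job is submitted through the Slurm scheduler. A minimal sketch of a job script is shown below; the resource values and the executable name test are assumptions to be adapted to your account and problem size.

#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --time=00:10:00
#SBATCH --mem-per-cpu=1G
module load pnetcdf/1.8.1
srun ./test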
Parallel-NetCDF on older SHARCNET clusters (e.g. orca)
Version Selection
Versions 1.5.0 and 1.6.1 are installed. The include files, libraries, and binaries are installed under /opt/sharcnet/pnetcdf/. To select a version, load the corresponding module:
module load pnetcdf/1.5.0
Example Linking
To link against the Parallel-NetCDF 1.2.0 library, load the matching compiler and MPI modules first:
module unload openmpi
module unload intel
module load intel/11.1.069
module load openmpi/intel/1.4.3
module load pnetcdf/intel/1.2.0
<your usual compile command> $CPPFLAGS $LDFLAGS -lpnetcdf
Compiling the C code
mpicc pnetcdf-write-nfiles.c -o test $CPPFLAGS $LDFLAGS -lpnetcdf
Compiling the f90 code
mpif90 test.f90 -o test $CPPFLAGS $LDFLAGS -lpnetcdf
Fortran90 example of a program that writes a 1D array, distributed across 4 processes, to a NetCDF file
program test
  use pnetcdf
  implicit none
  include 'mpif.h'
  integer :: rank, ncid, ierr, nout
  integer(kind=MPI_OFFSET_KIND) :: NX = 4, start(1), count(1), bufcount = 1
  integer :: x_dim
  integer :: dims = 1, dimids(1), varid
  real :: mydata(1)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  mydata(1) = rank*2.0                      ! example data: one value per rank

  ! create the file and define one dimension and one variable
  nout = nfmpi_create(MPI_COMM_WORLD, "output1.nc", NF_CLOBBER, MPI_INFO_NULL, ncid)
  nout = nfmpi_def_dim(ncid, "x", NX, x_dim)
  dimids = (/ x_dim /)
  nout = nfmpi_def_var(ncid, "xdata", NF_FLOAT, dims, dimids, varid)
  nout = nfmpi_enddef(ncid)

  ! each rank writes its own element (collective call)
  start = (/ rank+1 /)
  count = (/ 1 /)
  nout = nfmpi_put_vara_all(ncid, varid, start, count, mydata, bufcount, MPI_REAL)

  nout = nfmpi_close(ncid)
  call MPI_FINALIZE(ierr)
end program test
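Assuming the program above has been compiled as ./test, on an older SHARCNET cluster it would typically be submitted through the sqsub frontend and the resulting file inspected with the ncmpidump utility shipped with PnetCDF. The queue name, run time, and log file name below are illustrative assumptions:

sqsub -q mpi -n 4 -r 10m -o test.log ./test
ncmpidump output1.nc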
References
Parallel-NetCDF project homepage: http://trac.mcs.anl.gov/projects/parallel-netcdf