Revision as of 11:19, 12 November 2015

Description: A High Performance API for NetCDF File Access
SHARCNET Package information: see PNETCDF software page in web portal
Full list of SHARCNET supported software


The Parallel-NetCDF library provides high-performance parallel I/O to and from files in the popular NetCDF format. The include files, library, and binaries for pnetcdf 1.2.0 are installed in /opt/sharcnet/pnetcdf/1.2.0/intel.

Version Selection

module load pnetcdf/1.5.0

Example Linking

Link against the Parallel-NetCDF 1.2.0 library

module unload openmpi
module unload intel
module load intel/11.1.069
module load openmpi/intel/1.4.3
module load pnetcdf/intel/1.2.0
<your usual compile command> $CPPFLAGS $LDFLAGS -lpnetcdf

Compiling the C code

mpicc pnetcdf-write-nfiles.c -o test $CPPFLAGS $LDFLAGS -lpnetcdf
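The page shows how to compile C code against the library but does not include a C source listing. The following is a minimal C sketch of the same pattern as the Fortran 90 example below: each of 4 ranks writes one float to a shared file. The file name "output-c.nc" and the 4-element dimension are illustrative assumptions, and return codes are ignored for brevity (production code should check them against NC_NOERR).

```c
/* Minimal PnetCDF C sketch: each MPI rank writes one float
   to a shared NetCDF file. File name and dimension size are
   illustrative, not SHARCNET-specific. */
#include <mpi.h>
#include <pnetcdf.h>

int main(int argc, char **argv)
{
    int rank, ncid, dimid, varid;
    MPI_Offset start[1], count[1];
    float mydata;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    mydata = rank * 2.0f;                       /* example data */

    /* collective create; NC_CLOBBER overwrites an existing file */
    ncmpi_create(MPI_COMM_WORLD, "output-c.nc", NC_CLOBBER,
                 MPI_INFO_NULL, &ncid);
    ncmpi_def_dim(ncid, "x", 4, &dimid);
    ncmpi_def_var(ncid, "xdata", NC_FLOAT, 1, &dimid, &varid);
    ncmpi_enddef(ncid);

    start[0] = rank;                            /* C API is 0-based */
    count[0] = 1;
    ncmpi_put_vara_float_all(ncid, varid, start, count, &mydata);

    ncmpi_close(ncid);
    MPI_Finalize();
    return 0;
}
```

Note that the C API uses 0-based start indices, while the Fortran API below uses 1-based indices.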

Compiling the f90 code

mpif90 test.f90 -o test $CPPFLAGS $LDFLAGS -lpnetcdf

Fortran 90 example of a program that writes a 1D array distributed over 4 processes to a NetCDF file:

program test
 use pnetcdf
 implicit none
 include 'mpif.h'
 integer :: rank,ncid,ierr,nout
 integer(kind=MPI_OFFSET_KIND) :: NX=4,start(1),count(1),bufcount=1
 integer :: x_dim
 integer :: dims=1,dimids(1),varid
 real :: mydata(1)
 call MPI_INIT(ierr)
 call MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)  ! rank must be set before use
 mydata(1)=rank*2.0   ! example data
 ! "output.nc" is an example file name
 nout = nfmpi_create(MPI_COMM_WORLD,"output.nc",NF_CLOBBER,MPI_INFO_NULL,ncid)
 nout = nfmpi_def_dim(ncid,"x",NX,x_dim)
 dimids = (/x_dim/)
 nout = nfmpi_def_var(ncid,"xdata",NF_FLOAT,dims,dimids,varid)
 nout = nfmpi_enddef(ncid)
 start = (/rank+1/)   ! 1-based: each rank writes one element
 count = (/1/)
 ! nfmpi_put_vara_all is a function returning a status, not a subroutine
 nout = nfmpi_put_vara_all(ncid,varid,start,count,mydata,bufcount,MPI_REAL)
 nout = nfmpi_close(ncid)
 call MPI_FINALIZE(ierr)
end program test
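With the modules above loaded, the example can be compiled, run, and inspected roughly as follows. This is a sketch: the exact mpirun invocation depends on the cluster's scheduler, and the output file name here assumes the program passed "output.nc" to nfmpi_create. ncmpidump is the dump utility shipped with PnetCDF; the standard ncdump can usually read the file as well.

```shell
# compile and run on 4 processes (launch flags vary by cluster/scheduler)
mpif90 test.f90 -o test $CPPFLAGS $LDFLAGS -lpnetcdf
mpirun -np 4 ./test

# inspect the resulting file; each rank contributed one element of "xdata"
ncmpidump output.nc
```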


o Parallel Netcdf Project Homepage