PNETCDF
Description: A High Performance API for NetCDF File Access
SHARCNET Package information: see PNETCDF software page in web portal


Introduction

The Parallel-NetCDF library provides high-performance parallel I/O to and from files in the popular NetCDF format.

Parallel-NetCDF on new Compute Canada clusters (cedar and graham)

The Parallel-NetCDF library version 1.8.1 is installed as a module on cedar and graham. To load the module, execute:

module load pnetcdf/1.8.1

The library will then be linked when -lpnetcdf is included among the compiler flags, as in the compile commands below.

Compiling the C code

mpicc pnetcdf-write-nfiles.c -o test $CPPFLAGS $LDFLAGS -lpnetcdf
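The source file pnetcdf-write-nfiles.c is referenced but not shown on this page. The following is a minimal, hypothetical sketch of a Parallel-NetCDF C program (the file name output.nc, the variable names and the data layout are illustrative, not taken from that file): each MPI rank writes a single float into a shared NetCDF file. It can be compiled with the command above after substituting the source file name.

 /* write_one_float.c -- minimal sketch (NOT the pnetcdf-write-nfiles.c
    file named above): each MPI rank writes one float into a shared
    NetCDF file. */
 #include <stdio.h>
 #include <mpi.h>
 #include <pnetcdf.h>

 /* abort on any Parallel-NetCDF error; real code may recover instead */
 #define CHECK(err) if ((err) != NC_NOERR) { \
     fprintf(stderr, "pnetcdf error: %s\n", ncmpi_strerror(err)); \
     MPI_Abort(MPI_COMM_WORLD, 1); }

 int main(int argc, char **argv)
 {
     int rank, nprocs, ncid, dimid, varid, err;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

     /* collectively create the file, define one dimension of length
        nprocs and one float variable on it */
     err = ncmpi_create(MPI_COMM_WORLD, "output.nc", NC_CLOBBER,
                        MPI_INFO_NULL, &ncid);                   CHECK(err);
     err = ncmpi_def_dim(ncid, "x", (MPI_Offset)nprocs, &dimid); CHECK(err);
     err = ncmpi_def_var(ncid, "xdata", NC_FLOAT, 1, &dimid, &varid); CHECK(err);
     err = ncmpi_enddef(ncid);                                   CHECK(err);

     /* each rank writes its value at offset `rank` (collective call) */
     MPI_Offset start = rank, count = 1;
     float value = rank * 2.0f;
     err = ncmpi_put_vara_float_all(ncid, varid, &start, &count, &value); CHECK(err);
     err = ncmpi_close(ncid);                                    CHECK(err);
     MPI_Finalize();
     return 0;
 }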

Compiling the f90 code

mpif90 test.f90 -o test $CPPFLAGS $LDFLAGS -lpnetcdf
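A complete Fortran90 example program that can be compiled with this command is given at the end of this page.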

Parallel-NetCDF on older SHARCNET clusters (e.g. orca)

Version Selection

Versions 1.5.0 and 1.6.1 are installed. The include files, library and binaries are installed in /opt/sharcnet/pnetcdf/. To select a version, load the corresponding module, e.g.:

module load pnetcdf/1.5.0

Example Linking

Link against the Parallel-NetCDF 1.2.0 library

module unload openmpi
module unload intel
module load intel/11.1.069
module load openmpi/intel/1.4.3
module load pnetcdf/intel/1.2.0
<your usual compile command> $CPPFLAGS $LDFLAGS -lpnetcdf

Compiling the C code

mpicc pnetcdf-write-nfiles.c -o test $CPPFLAGS $LDFLAGS -lpnetcdf

Compiling the f90 code

mpif90 test.f90 -o test $CPPFLAGS $LDFLAGS -lpnetcdf

Fortran90 example of a program that writes a 1D array distributed on 4 processes to a NetCDF file

 program test
  use pnetcdf
  implicit none
  include 'mpif.h'
  integer :: rank, ncid, ierr, nout
  INTEGER(KIND=MPI_OFFSET_KIND) :: NX=4, start(1), count(1), bufcount=1
  integer :: x_dim
  integer :: dims=1, dimids(1), varid
  real :: mydata(1)

  call MPI_INIT(ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD,rank,ierr)
  mydata(1) = rank*2.0   ! example data: one value per rank

  ! Collectively create the file, then define a length-4 dimension and a
  ! float variable on it (nout holds the return status of each call;
  ! production code should compare it against NF_NOERR)
  nout = nfmpi_create(MPI_COMM_WORLD,"output1.nc",NF_CLOBBER,MPI_INFO_NULL,ncid)
  nout = nfmpi_def_dim(ncid,"x",NX,x_dim)
  dimids = (/x_dim/)
  nout = nfmpi_def_var(ncid,"xdata",NF_FLOAT,dims,dimids,varid)
  nout = nfmpi_enddef(ncid)   ! leave define mode

  ! Each rank writes its single value at 1-based offset rank+1.
  ! nfmpi_put_vara_all is a collective integer function, so its result
  ! must be assigned rather than invoked with "call".
  start = (/rank+1/)
  count = (/1/)
  nout = nfmpi_put_vara_all(ncid,varid,start,count,mydata,bufcount,MPI_REAL)
  nout = nfmpi_close(ncid)
  call MPI_FINALIZE(ierr)
 end program test
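The program assumes exactly 4 MPI processes, since the dimension length NX is 4 and each rank writes one element; run it with, for example, mpirun -np 4 ./test and inspect output1.nc with ncdump (part of the serial NetCDF tools).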

References

Parallel-NetCDF project homepage: http://trac.mcs.anl.gov/projects/parallel-netcdf