
Cluster saw.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 6.x
Interconnect: DDR InfiniBand
Total processors/cores: 2712
Nodes:
saw: 1‑336
8 cores
2 sockets x 4 cores per socket
Intel Xeon @ 2.83 GHz
Type: Compute
Notes: Compute nodes
Memory: 16.0 GB
Local storage: None
saw: 8001
8 cores
2 sockets x 4 cores per socket
Intel Xeon @ 2.83 GHz
Type: Admin
Notes: Admin node
Memory: 16.0 GB
Local storage: None
saw: 9001‑9002
8 cores
2 sockets x 4 cores per socket
Intel Xeon @ 2.83 GHz
Type: Login
Notes: Login nodes
Memory: 16.0 GB
Local storage: None
Total attached storage: 127 TB
Suitable use

Parallel applications.

Software available

MATLAB, GAUSSIAN, LSDYNA, OPENJDK, CHARM++, CP2K, ACML, ECLIPSE, FREEFEM++, UTIL, ABAQUS, INTEL, SIESTA, ADF/BAND, LAMMPS, R, PYTHON, PARI/GP, CONVERGE, OCTAVE, NETCDF, MERCURIAL, OPENMPI, BLAST, ABINIT, NWCHEM, DAR, MPFUN90, GCC, MKL, OPEN64, PETSC_SLEPC, ORCA, SAMTOOLS, CMAKE, GIT, GNU, FFTW, BOOST, ESPRESSO, CDF, QD, GEANT4, TINKER, SUBVERSION, STAR-CCM+, HDF, GROMACS, MrBAYES, GMP, BINUTILS, MPC, BIOPERL, CPMD, PERL, SPRNG, MPFR, VIM, AMBER, VALGRIND, BIOSAMTOOLS, RLWRAP, MPFUN2015, TEXLIVE, YT, DLPOLY, PROOT, COREUTILS, NAMD, SUPERLU, PNETCDF, FDTD, GNUPLOT, GSL, MPIBLAST, SQ, BIOPERLRUN, ILOGCPLEX, IPM, PGI, OPENCV, LDWRAPPER, ARPACK-NG, CPAN, RUBY, NIX, GHC, VMD, ANSYS, EMACS, AUTODOCKVINA
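Since OPENMPI and the SQ scheduler wrapper appear in the list above, a parallel job on this cluster would typically be compiled against MPI and submitted through `sqsub`. The sketch below is illustrative only: the module name, runtime limit, and output filename are assumptions, and the exact flags available may differ on this system.

```shell
# Sketch only: module names and sqsub options may vary on saw.
module load openmpi              # assumed module name for the Open MPI stack
mpicc -O2 -o hello hello.c       # compile an MPI program

# Submit via SHARCNET's SQ wrapper:
#   -q mpi        : parallel (MPI) queue
#   -n 8          : request 8 cores (one saw compute node)
#   -r 1h         : illustrative runtime limit
#   -o hello.log  : job output file
sqsub -q mpi -n 8 -r 1h -o hello.log ./hello
```

Requesting cores in multiples of 8 keeps a job aligned with saw's 8-core (2 socket x 4 core) node layout.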

Current system state: details and graphs

Recent System Notices

Status Notes
Jul 25 2016, 12:39PM

The saw cluster is now back online after the repair of the power feed to the server room.

Jul 22 2016, 10:06AM

In order to repair a critical power infrastructure issue at the old University of Waterloo data centre, the SHARCNET orca, saw, tembo, angel and mosaic clusters will need to be powered down on July 25. The downtime is planned to take at most one day, and the clusters will be available on July 26. We apologize for any inconvenience this may cause.

Jul 20 2016, 10:03AM

Due to a power failure in the server room, saw's scratch storage and some saw compute nodes are presently unavailable.

Jun 27 2016, 05:47PM

Saw has now returned to normal operations following the power outage at Waterloo. Any jobs that were running at the time of the failure were killed and will need to be resubmitted.

Jun 27 2016, 08:20AM

Due to a power outage at Waterloo, Orca, Saw, Angel, Brown, and Tembo are presently offline.

We are presently working to return the clusters to operational status, and will post further updates when they are returned to service.
