
Cluster: saw.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 6.x
Interconnect: DDR InfiniBand
Total processors/cores: 2712
Nodes
saw: 1-336
    Type: Compute (compute nodes)
    Cores: 8 per node (2 sockets x 4 cores per socket), Intel Xeon @ 2.83 GHz
    Memory: 16.0 GB
    Local storage: None
saw: 8001
    Type: Admin (admin node)
    Cores: 8 per node (2 sockets x 4 cores per socket), Intel Xeon @ 2.83 GHz
    Memory: 16.0 GB
    Local storage: None
saw: 9001-9002
    Type: Login (login nodes)
    Cores: 8 per node (2 sockets x 4 cores per socket), Intel Xeon @ 2.83 GHz
    Memory: 16.0 GB
    Local storage: None
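
As a consistency check on the totals above: (336 compute + 1 admin + 2 login) nodes x 8 cores per node = 339 x 8 = 2712 cores, matching the stated processor count.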
Total attached storage: 127 TB
Suitable use

Parallel applications.
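
With OpenMPI and GCC available (see the software list below), a typical workload is an MPI program spread across the 8-core nodes. The following is a minimal sketch; the file name and program are illustrative, not cluster-specific:

    /* hello_mpi.c - print one line per MPI rank */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                         /* shut down cleanly */
        return 0;
    }

Such a program would be compiled with mpicc (e.g. mpicc hello_mpi.c -o hello_mpi) and submitted through the SQ scheduler listed below; exact queue names and submission flags should be checked in the SHARCNET Help Wiki.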

Software available

MATLAB, GAUSSIAN, LSDYNA, OPENJDK, CHARM++, CP2K, ECLIPSE, FREEFEM++, UTIL, ABAQUS, INTEL, SIESTA, ADF/BAND, LAMMPS, R, PYTHON, PARI/GP, CONVERGE, OCTAVE, NETCDF, MERCURIAL, OPENMPI, BLAST, ABINIT, NWCHEM, DAR, MPFUN90, GCC, MKL, OPEN64, PETSC_SLEPC, ORCA, SAMTOOLS, CMAKE, GIT, GNU, FFTW, ACML, BOOST, ESPRESSO, CDF, QD, TINKER, SUBVERSION, NCL, STAR-CCM+, HDF, GROMACS, MrBAYES, GMP, BINUTILS, MPC, BIOPERL, CPMD, PERL, SPRNG, MPFR, VIM, AMBER, VALGRIND, BIOSAMTOOLS, RLWRAP, MPFUN2015, TEXLIVE, YT, DLPOLY, PROOT, COREUTILS, NAMD, SUPERLU, PNETCDF, FDTD, GNUPLOT, GSL, MPIBLAST, SQ, BIOPERLRUN, ILOGCPLEX, IPM, PGI, OPENCV, LDWRAPPER, ARPACK-NG, CPAN, RUBY, NIX, GHC, VMD, ANSYS, EMACS, AUTODOCKVINA, GEANT4
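
On SHARCNET systems these packages are normally exposed through environment modules rather than the default PATH, loaded with the module command (for example, module load octave; the module name and version here are an assumption and should be confirmed with module avail).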

Current system state details: Graphs

Recent System Notices

Jul 29 2016, 04:28PM

Starting yesterday afternoon, /global/b has returned to service on all SHARCNET clusters. We apologize for the extended outage, and believe that no data was lost. In recognition of how disruptive this outage was to roughly one third of users, affected users will be given higher inter-cluster fairshare/priority.

If you notice problems, please report them via the ticket system: help@sharcnet.ca or https://www.sharcnet.ca/my/problems/submit

Jul 25 2016, 12:39PM

The saw cluster is now back online following the repair of the power feed to the server room.

Jul 22 2016, 10:06AM

In order to repair a critical power infrastructure issue at the old University of Waterloo data centre, the SHARCNET orca, saw, tembo, angel and mosaic clusters will need to be powered down on July 25. The downtime is planned to take at most one day, and the clusters will be available again on July 26th. We apologize for any inconvenience this may cause.

Jul 20 2016, 10:03AM

Due to a power failure in the server room, the saw scratch filesystem and some saw compute nodes are presently unavailable.

Jun 27 2016, 05:47PM

Saw has now returned to normal operations following the power outage at Waterloo. Any jobs that were running at the time of the failure were killed and will need to be resubmitted.
