
Cluster: saw.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 6.x
Interconnect: DDR InfiniBand
Total processors/cores: 2712
Nodes:
saw: 1‑336
  Type: Compute (compute nodes)
  8 cores: 2 sockets x 4 cores per socket, Intel Xeon @ 2.83 GHz
  Memory: 16.0 GB
  Local storage: None
saw: 8001
  Type: Admin (admin node)
  8 cores: 2 sockets x 4 cores per socket, Intel Xeon @ 2.83 GHz
  Memory: 16.0 GB
  Local storage: None
saw: 9001‑9002
  Type: Login (login nodes)
  8 cores: 2 sockets x 4 cores per socket, Intel Xeon @ 2.83 GHz
  Memory: 16.0 GB
  Local storage: None
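(The core total quoted above is consistent with this list: (336 + 1 + 2) nodes x 8 cores per node = 2712 cores.)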
Total attached storage: 127 TB
Suitable use

Parallel applications.
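
Saw is aimed at distributed-memory parallel workloads; OpenMPI and GCC appear in the software list below, so MPI codes are a typical fit. As an illustration only (not an official SHARCNET example; the file name and the mpicc/mpirun invocations are assumptions), a minimal MPI program of the kind the cluster is suited to looks like this:

    /* hello_mpi.c -- minimal MPI sketch; assumes an MPI compiler wrapper
     * such as mpicc from the OpenMPI package listed below.
     * Build:  mpicc hello_mpi.c -o hello_mpi
     * Run:    mpirun -np 16 ./hello_mpi   (e.g. across two 8-core saw nodes)
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char node[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
        MPI_Get_processor_name(node, &len);     /* node this rank is on  */

        /* Ranks placed on different nodes exchange messages over the
         * DDR InfiniBand interconnect described above. */
        printf("Rank %d of %d on node %s\n", rank, size, node);

        MPI_Finalize();
        return 0;
    }

On SHARCNET systems, production jobs are normally launched through the queueing system rather than by calling mpirun directly; see the system documentation linked above.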

Software available

MATLAB, GAUSSIAN, LSDYNA, OPENJDK, CHARM++, CP2K, STAR-CCM+, FREEFEM++, UTIL, ABAQUS, INTEL, SIESTA, ADF/BAND, LAMMPS, R, PYTHON, PARI/GP, CONVERGE, OCTAVE, NETCDF, MERCURIAL, OPENMPI, BLAST, GCC, ABINIT, NWCHEM, DAR, MPFUN90, MKL, OPEN64, PETSC_SLEPC, ORCA, SAMTOOLS, CMAKE, GIT, GNU, FFTW, ACML, BOOST, ESPRESSO, CDF, QD, TINKER, SUBVERSION, NCL, HDF, GROMACS, MrBAYES, GMP, BINUTILS, MPC, BIOPERL, CPMD, PERL, SPRNG, MPFR, VIM, AMBER, VALGRIND, BIOSAMTOOLS, RLWRAP, MPFUN2015, TEXLIVE, YT, DLPOLY, PROOT, COREUTILS, NAMD, SUPERLU, PNETCDF, FDTD, GNUPLOT, GSL, MPIBLAST, SQ, BIOPERLRUN, ILOGCPLEX, IPM, PGI, OPENCV, LDWRAPPER, ARPACK-NG, CPAN, RUBY, NIX, GHC, VMD, ANSYS, EMACS, AUTODOCKVINA, GEANT4, NCBIC++TOOLKIT

Recent System Notices

May 25 2017, 12:56PM
(3 days ago)

The SAW cluster is now back in operation.

May 23 2017, 01:34PM
(5 days ago)

There was a power failure at UWaterloo this morning. All running jobs were killed. We are currently working on bringing the systems back up.

Mar 17 2017, 10:19AM
(2 months ago)

Saw is back in operation.

Feb 23 2017, 11:42AM
(3 months ago)

Some users might encounter issues while accessing saw's scratch filesystem. We are working to resolve this.

Jul 29 2016, 04:28PM
(10 months ago)

As of yesterday afternoon, /global/b has returned to service on all SHARCNET clusters. We apologize for the extended outage and believe that there was no loss of data. In recognition of how disruptive this outage was to 1/3 of users, affected users will be given higher inter-cluster fairshare priority.

If you notice problems, please report them via the ticket system: help@sharcnet.ca or https://www.sharcnet.ca/my/problems/submit
