
Cluster: monk.sharcnet.ca
Links: System documentation in the SHARCNET Help Wiki
Manufacturer: IBM
Operating System: CentOS
Interconnect: QDR InfiniBand
GPU: 2x NVIDIA Tesla M2070 per node
Total processors/cores: 432
Nodes (monk 1-54):
    8 cores per node (2 sockets x 4 cores per socket)
    Intel Xeon E5607 @ 2.26 GHz
    Type: Compute
    Memory: 48.0 GB
    Local storage: 0 bytes
Total attached storage: 66 TB
Suitable use

Parallel applications with GPU acceleration

Software available

GCC, UTIL, GSL, GROMACS, OCTAVE, MPFUN2015, PETSC_SLEPC, FFTW, SQ, SAMTOOLS, BIOPERL, OPENCV, EMACS, OPENMPI, INTEL, VIM, NETCDF, MPFR, AMBER, PYTHON, BIOSAMTOOLS, BINUTILS, BOOST, CPAN, NCL, NCBIC++TOOLKIT, GHC, R, PERL, CUDA, BIOPERLRUN, MrBAYES, GMP, SYSTEM, SPRNG, SUBVERSION, NAMD, OPEN64, MPC, PNETCDF, BLAST, GNUPLOT, PGI, COREUTILS, TEXLIVE, GIT, LLVM, OPENJDK, CMAKE, HDF, TINKER, PROOT, ACML, GDB, MONO, IPM, AUTODOCKVINA, MPFUN90, NIX, GNU, DAR, CDF, PARI/GP, SIESTA, NINJA, QD, ORCA, RUBY, CHARM++, YT, MKL, LDWRAPPER, VALGRIND, MERCURIAL, ARPACK-NG, GEANT4, VMD
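
As an illustration of the sort of parallel workload monk was suited to, the following is a minimal sketch in Python, assuming the PYTHON and OPENMPI modules above plus the mpi4py package (mpi4py itself is not in the list). It shows only the MPI-parallel side; GPU acceleration would normally be layered on top via the CUDA toolkit.

    # Each MPI rank sums a strided slice of 0..N-1; rank 0 combines the results.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # index of this process within the MPI job
    size = comm.Get_size()   # total number of MPI processes

    N = 10_000_000
    local_sum = sum(range(rank, N, size))              # this rank's share of the work
    total = comm.reduce(local_sum, op=MPI.SUM, root=0)

    if rank == 0:
        print("total =", total)

On an 8-core monk node this would be launched with something like mpirun -np 8 python sum.py.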


Recent System Notices

Mar 29 2019, 11:07AM

Monk has been decommissioned and is no longer available.

Feb 21 2019, 02:48PM

Due to their age, the legacy clusters orca, windeee, goblin, shadowfax, copper, and monk will be decommissioned on March 29, 2019. Some nodes from goblin, shadowfax, and copper will be transferred to a new cluster; the affected contributors will be contacted separately.

After March 29, any data you have in /scratch on these clusters will become inaccessible; anything you want to keep should be copied elsewhere as soon as possible. For orca, data in /home and /project is shared with Graham and will be unaffected. For the other clusters, data in /home and /work will remain accessible for a few months via dtn.sharcnet.ca.
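
As a rough sketch of what such a copy might look like (not an official procedure), the Python snippet below pulls a directory from the legacy /work filesystem through dtn.sharcnet.ca using rsync over ssh; the username and both paths are placeholders to replace with your own, and the assumed /work/<username> layout should be checked against your account.

    # Sketch only: copy a legacy /work directory to another machine via the
    # dtn.sharcnet.ca data-transfer node. USER, SRC, and DEST are placeholders.
    import subprocess

    USER = "username"                                 # your SHARCNET username
    SRC = f"{USER}@dtn.sharcnet.ca:/work/{USER}/"     # assumed layout; verify your path
    DEST = "/backup/monk-work/"                       # destination of your choice

    # rsync over ssh: -a preserves permissions and timestamps, -v is verbose,
    # --partial lets an interrupted transfer resume where it left off.
    subprocess.run(["rsync", "-av", "--partial", SRC, DEST], check=True)

Running rsync directly from a shell works just as well; the subprocess wrapper only keeps the example self-contained.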

If you have not already done so, we strongly recommend that you move all of your computing to the new national systems Graham, Cedar, and Niagara. See https://docs.computecanada.ca/wiki/Getting_Started for help getting started.

Feb 05 2019, 12:05PM

One of the legacy global filesystems will be migrated to new hardware on Wednesday, February 20th. To complete this, we must unmount the filesystem from all clusters and prevent jobs from running during the outage.

All legacy clusters will be configured to avoid running any jobs after 3pm on February 19.

We expect all legacy clusters to return to service the following day at 10am.

This outage does not affect Graham or Orca.

Jul 09 2018, 04:01PM

The cluster is back up; the cooling system failure has been resolved.

Jul 05 2018, 02:58PM

The cluster is down due to a failure of the cooling system. Maintenance technicians have advised that the cooling system cannot handle the recent very hot weather and have requested that we keep the system shut down until Monday, July 9.
