
Cluster: kraken.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 5.4
Interconnect: Myrinet 2g (GM)
Total processors/cores: 1968
Nodes:

narwhal: 1‑267
  Type: Compute
  CPU: 4 cores (2 sockets x 2 cores per socket), AMD Opteron @ 2.2 GHz
  Memory: 8.0 GB
  Local storage: 30 GB

bull: 1‑96
  Type: Compute
  CPU: 4 cores (4 sockets x 1 core per socket), AMD Opteron @ 2.4 GHz
  Memory: 32.0 GB
  Local storage: 150 GB

bull: 128‑159
  Type: Compute
  CPU: 4 cores (2 sockets x 2 cores per socket), AMD Opteron @ 2.2 GHz
  Memory: 8.0 GB
  Local storage: 80 GB

bull: 301‑396
  Type: Compute
  CPU: 4 cores (2 sockets x 2 cores per socket), AMD Opteron @ 2.2 GHz
  Memory: 8.0 GB
  Local storage: 80 GB

kraken: 240
  Type: Admin
  CPU: 2 cores, AMD Opteron @ 2.2 GHz
  Memory: 8.0 GB
  Local storage: 80 GB

kraken: 241
  Type: Login
  CPU: 2 cores, AMD Opteron @ 2.2 GHz
  Memory: 4.0 GB
  Local storage: 160 GB
Total attached storage: 2.73 TB
Suitable use

Kraken is a throughput cluster, an amalgamation of older point-of-presence and throughput clusters. It is suitable for serial applications and for small-scale parallel MPI applications with demanding latency requirements.
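Since the system targets serial and small-scale MPI workloads (OPENMPI and the SQ scheduler tools appear in the software list below), a typical workflow was to compile against OpenMPI and submit through SHARCNET's `sqsub` front end. The exact flags below follow the usage documented in the SHARCNET Help Wiki rather than anything on this page, so treat this as an illustrative sketch, not a verbatim recipe:

```shell
# Compile an MPI program with the OpenMPI wrapper compiler.
mpicc -O2 -o hello_mpi hello_mpi.c

# Submit it via sqsub (SHARCNET's SQ scheduler front end):
#   -q mpi  : MPI queue
#   -n 8    : number of MPI processes
#   -r 60m  : runtime limit
#   -o FILE : file to capture job output
sqsub -q mpi -n 8 -r 60m -o hello_mpi.log ./hello_mpi
```

Queue names, core counts, and runtime limits varied per cluster; the values here are placeholders.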

Software available

MAPLE, MrBAYES, COREUTILS, CP2K, SIESTA, CMAKE, NETCDF, IMSL, GIT, OPENJDK, UTIL, DDT, R, MPFUN90, PARI/GP, OPENMPI, BLAST, OCTAVE, SPRNG, HDF, ACML, GCC, MERCURIAL, DAR, SUBVERSION, PERL, INTEL, PETSC_SLEPC, ORCA, GNUPLOT, CDF, SAMTOOLS, GMP, MKL, BIOPERL, NCL, GNU, BOOST, OPEN64, PYTHON, BIOSAMTOOLS, MPFUN2015, QD, VIM, TEXLIVE, RLWRAP, VALGRIND, YT, FDTD, CHARM++, MPFR, SUPERLU, PNETCDF, IPM, GROMACS, GSL, BINUTILS, MPC, BIOPERLRUN, SQ, ILOGCPLEX, PGI, OPENCV, LDWRAPPER, ARPACK-NG, EMACS, CPAN, RUBY, NIX, PROOT, GHC, ANSYS, AUTODOCKVINA, GEANT4, NCBIC++TOOLKIT


Recent System Notices

Aug 28 2017, 09:49AM
(about 1 month ago)

Cluster has been decommissioned and is no longer available.

Aug 04 2017, 02:01PM
(2 months ago)

Cluster has been recovered after the power outage.

Now that Graham is fully operational, Kraken will be completely decommissioned at 3pm Friday August 11. Jobs will continue to run normally until that time, but after that time any queued or running jobs will be lost.

Aug 03 2017, 11:36AM
(3 months ago)

There is a power failure at UWO. All running jobs were killed. Storage for some /work and /home users is currently still available. If power isn’t restored in ~30 min, that storage will also have to be shut down.

Apr 25 2017, 02:59PM
(6 months ago)

All of kraken’s nodes based at Guelph (all the nodes named “nar#”) will be decommissioned as of this Friday, April 29. Our apologies for the short notice.

Apr 20 2017, 12:49PM
(6 months ago)

Most of the cluster nodes affected by the power outage have been recovered; we are still working on the few that remain.
