
Cluster: kraken.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 5.4
Interconnect: Myrinet 2g (gm)
Total processors/cores: 1968
Nodes
narwhal: 1-267
  Cores: 4 (2 sockets x 2 cores per socket)
  CPU: AMD Opteron @ 2.2 GHz
  Type: Compute
  Notes: Compute nodes.
  Memory: 8.0 GB
  Local storage: 30 GB
bull: 1-96
  Cores: 4 (4 sockets x 1 core per socket)
  CPU: AMD Opteron @ 2.4 GHz
  Type: Compute
  Notes: N/A
  Memory: 32.0 GB
  Local storage: 150 GB
bull: 128-159
  Cores: 4 (2 sockets x 2 cores per socket)
  CPU: AMD Opteron @ 2.2 GHz
  Type: Compute
  Notes: N/A
  Memory: 8.0 GB
  Local storage: 80 GB
bull: 301-396
  Cores: 4 (2 sockets x 2 cores per socket)
  CPU: AMD Opteron @ 2.2 GHz
  Type: Compute
  Notes: N/A
  Memory: 8.0 GB
  Local storage: 80 GB
kraken: 240
  Cores: 2
  CPU: AMD Opteron @ 2.2 GHz
  Type: Admin
  Memory: 8.0 GB
  Local storage: 80 GB
kraken: 241
  Cores: 2
  CPU: AMD Opteron @ 2.2 GHz
  Type: Login
  Memory: 4.0 GB
  Local storage: 160 GB
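
For reference, the total core count above is consistent with this node list: (267 + 96 + 32 + 96) compute nodes x 4 cores each, plus 2 admin cores and 2 login cores, gives 1968 cores.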
Total attached storage: 2.73 TB
Suitable use

A throughput cluster, an amalgamation of older point-of-presence and throughput clusters, suitable for serial applications and for small-scale parallel MPI applications that demand low latency.
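
As an illustration of the small-scale MPI workloads this cluster suits, below is a minimal MPI example in C. It is a sketch only: it assumes the OpenMPI and GCC packages from the software list below, and the exact compile and submission commands on kraken may differ.

/* hello_mpi.c -- minimal example of a small parallel MPI job.
 * Assumes an MPI implementation such as the OpenMPI listed below;
 * build with the usual MPI compiler wrapper, e.g. mpicc hello_mpi.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down MPI cleanly */
    return 0;
}

A job of this kind would typically be launched across a handful of cores (for example, mpirun -np 4 ./a.out), though on this system jobs are normally submitted through the scheduler rather than run directly on the login node.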

Software available

MAPLE, MrBAYES, COREUTILS, CP2K, MATLAB, NETCDF, SIESTA, ANSYS, CMAKE, PYTHON, IMSL, GEANT4, GIT, OPENJDK, UTIL, HDF, DDT, R, MPFUN, PARI/GP, OPENMPI, BLAST, INTEL, OCTAVE, SPRNG, ACML, GCC, MERCURIAL, DAR, SUBVERSION, PERL, PETSC_SLEPC, AMBER, ORCA, GNUPLOT, CDF, SAMTOOLS, GMP, MKL, FFTW, BOOST, BIOPERL, GNU, PNETCDF, ARPACK-NG, OPEN64, BIOSAMTOOLS, QD, ABINIT, VIM, TEXLIVE, RLWRAP, IPM, VALGRIND, YT, MPC, FDTD, CHARM++, MPFR, SUPERLU, GROMACS, GSL, BINUTILS, SQ, ILOGCPLEX, PGI, OPENCV


Recent System Notices

May 08 2015, 01:39PM
(21 days ago)

The nodes that were recently shut down for cooling maintenance are once again down due to a cooling system failure. Any jobs running on those nodes will be lost; we apologize for the inconvenience. Facilities maintenance has been contacted.

May 08 2015, 07:43AM
(21 days ago)

The nodes that were shut down for cooling maintenance are back online.

Apr 27 2015, 10:56AM
(about 1 month ago)

Some nodes will be shut down on May 4 at 4pm for 24 hours to allow cooling system maintenance. The scheduler has been configured to prevent jobs from starting on these nodes if they will not complete before the shutdown, so no action is required on your part.

Apr 07 2015, 08:06AM
(about 1 month ago)

Nodes are back up after the scheduled power outage at 7:15 this morning.

Apr 07 2015, 07:02AM
(about 1 month ago)

Some nodes are down for a scheduled power outage at 7:15 this morning. Power will be back at 7:30 and systems will be restarted as soon as possible after that.
