
Cluster kraken.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 5.4
Interconnect: Myrinet 2g (gm)
Total processors/cores: 1968 (see the core tally after the node list below)
Nodes
narwhal: 1-267
  Type: Compute
  Cores: 4 (2 sockets x 2 cores per socket)
  CPU: AMD Opteron @ 2.2 GHz
  Memory: 8.0 GB
  Local storage: 30 GB
  Notes: Compute nodes.
bull: 1-96
  Type: Compute
  Cores: 4 (4 sockets x 1 core per socket)
  CPU: AMD Opteron @ 2.4 GHz
  Memory: 32.0 GB
  Local storage: 150 GB
  Notes: N/A
bull: 128-159
  Type: Compute
  Cores: 4 (2 sockets x 2 cores per socket)
  CPU: AMD Opteron @ 2.2 GHz
  Memory: 8.0 GB
  Local storage: 80 GB
  Notes: N/A
bull: 301-396
  Type: Compute
  Cores: 4 (2 sockets x 2 cores per socket)
  CPU: AMD Opteron @ 2.2 GHz
  Memory: 8.0 GB
  Local storage: 80 GB
  Notes: N/A
kraken: 240
  Type: Admin
  Cores: 2
  CPU: AMD Opteron @ 2.2 GHz
  Memory: 8.0 GB
  Local storage: 80 GB
kraken: 241
  Type: Login
  Cores: 2
  CPU: AMD Opteron @ 2.2 GHz
  Memory: 4.0 GB
  Local storage: 160 GB
Total attached storage: 2.73 TB
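Core tally: 267×4 (narwhal) + 96×4 + 32×4 + 96×4 (bull) + 2 + 2 (kraken admin/login) = 1968 cores, matching the processor total above.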
Suitable use

A throughput cluster, formed as an amalgamation of older point-of-presence and throughput clusters. It is suited to serial applications and to small-scale parallel MPI applications that demand low latency.
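As a sketch of the small-scale MPI work described above, here is a minimal C program, assuming the OpenMPI toolchain from the software list below; the file name hello.c and the commands that follow are illustrative, not site documentation.

    /* hello.c: each MPI rank reports its rank and the communicator size. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                         /* shut down cleanly */
        return 0;
    }

With OpenMPI this would compile as mpicc hello.c -o hello and run as mpirun -np 4 ./hello, which would fill the four cores of one compute node. Batch submission on SHARCNET systems of this era went through the sq tools (listed below), along the lines of sqsub -q mpi -n 4 -r 1h -o hello.log ./hello; treat that syntax as an approximation and consult the Help Wiki linked above.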

Software available

MAPLE, MrBAYES, COREUTILS, CP2K, MATLAB, SIESTA, CMAKE, PYTHON, NETCDF, IMSL, GEANT4, GIT, OPENJDK, UTIL, DDT, R, MPFUN90, PARI/GP, OPENMPI, BLAST, OCTAVE, SPRNG, HDF, ACML, GCC, MERCURIAL, DAR, SUBVERSION, PERL, INTEL, PETSC_SLEPC, AMBER, ORCA, GNUPLOT, CDF, SAMTOOLS, GMP, MKL, FFTW, BIOPERL, GNU, BOOST, OPEN64, BIOSAMTOOLS, MPFUN2015, QD, VIM, TEXLIVE, RLWRAP, VALGRIND, YT, FDTD, CHARM++, MPFR, SUPERLU, PNETCDF, IPM, GROMACS, GSL, BINUTILS, MPC, BIOPERLRUN, SQ, ILOGCPLEX, PGI, OPENCV, LDWRAPPER, ARPACK-NG, CPAN, RUBY, NIX, PROOT, ECLIPSE, GHC, ANSYS


Recent System Notices

Jun 19 2016, 03:59PM (12 days ago)
Several UWO-based systems may have limited service due to cooling failures, including some global work systems, goblin and kraken. We hope to keep service disruption to a minimum.

Jun 17 2016, 03:54PM (14 days ago)
Our apologies, but some of the “bul” nodes have been shut down due to more cooling problems in the data centre. Jobs on those nodes have been lost.

May 27 2016, 10:26AM (about 1 month ago)
Cooling repairs have been completed and the “bul” nodes that were shut down due to a failed cooling system are now running again.

May 25 2016, 04:30PM (about 1 month ago)
Some “bul” nodes are shut down due to a failed cooling system. Local technicians are working on the problem and expect to resolve it tomorrow morning.

May 25 2016, 03:21PM (about 1 month ago)
We’re doing an emergency shutdown of some “bul” nodes due to a failed cooling system. Some jobs will likely be lost; our apologies in advance.
