Cluster: goblin.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: Sun
Operating System: CentOS 6.3
Interconnect: Gigabit Ethernet
Total processors/cores: 648
Nodes
goblin: 1‑15
4 cores
2 sockets x 2 cores per socket
AMD Opteron @ 2.4 GHz
Type: Compute
Notes: N/A
Memory: 8.0 GB
Local storage: 160 GB
goblin: 16‑20
16 cores
2 sockets x 4 cores per socket
Intel Xeon @ 2.53 GHz
Type: Compute
Notes: This is an 8-core node with hyperthreading enabled, which makes it appear to have 16 cores. The scheduler is configured to use only the 8 physical cores (see the sketch following this entry).
Memory: 12.0 GB
Local storage: None
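
Since the scheduler allocates only the 8 physical cores on these nodes while the hardware reports 16 logical CPUs, threaded code should cap its thread count explicitly rather than trust the hardware count. Below is a minimal OpenMP sketch in C (illustrative only; the cap of 8 threads reflects the scheduler note above):

    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        /* On a hyperthreaded gb16-20 node this reports 16 logical CPUs,
           even though only 8 physical cores are schedulable. */
        int logical = omp_get_num_procs();

        /* Cap threads at the 8 cores the scheduler actually allocates. */
        int threads = logical > 8 ? 8 : logical;
        omp_set_num_threads(threads);

        #pragma omp parallel
        {
            #pragma omp single
            printf("logical CPUs: %d, threads used: %d\n",
                   logical, omp_get_num_threads());
        }
        return 0;
    }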
goblin: 21‑36
8 cores
2 sockets x 4 cores per socket
Intel Xeon @ 2.27 GHz
Type: Compute
Notes: This is a subsystem contributed by Dr. Peter Rogan of Western University, dedicated to human genome research.
Memory: 48.0 GB
Local storage: 1000 GB
goblin: 37‑48
24 cores
2 sockets x 12 cores per socket
AMD Opteron @ 2.6 GHz
Type: Compute
Notes: This is a subsystem contributed by Dr. Lance Lochner et al. of Western University, dedicated to research in economics.
Memory: 64.0 GB
Local storage: None
goblin: 49
12 cores
2 sockets x 6 cores per socket
Intel Xeon @ 2.3 GHz
Type: Compute
Notes: This system has two Intel Xeon Phi coprocessors installed, each with 60 cores running 240 hardware threads (see the sketch following this entry).
Memory: 32.0 GB
Local storage: 500 GB
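
One common way to use such coprocessors was Intel's offload model. Below is a hedged sketch in C, assuming the classic "#pragma offload" language extension from the Intel compiler toolchain of that era; "mic:0" addresses the first card, and compilers without offload support simply ignore the pragma and run the loop on the host:

    #include <stdio.h>

    #define N 1024

    int main(void) {
        float a[N], b[N];
        for (int i = 0; i < N; i++) a[i] = (float)i;

        /* Offload the loop to the first Xeon Phi card; its ~240
           hardware threads are managed by the card's own OpenMP
           runtime. */
        #pragma offload target(mic:0) in(a) out(b)
        {
            #pragma omp parallel for
            for (int i = 0; i < N; i++)
                b[i] = a[i] * a[i];
        }

        printf("b[%d] = %f\n", N - 1, b[N - 1]);
        return 0;
    }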
goblin: 50
32 cores
4 sockets x 8 cores per socket
Intel Xeon @ 2.2 GHz
Type: Compute
Notes: This system was purchased and contributed by Dr. Lucian Ilie of the Department of Computer Science at Western University for research in bioinformatics and mathematics.
Memory: 1024.0 GB
Local storage: 150 GB
goblin: 51‑54
12 cores
2 sockets x 6 cores per socket
Intel Xeon @ 2.0 GHz
Type: Compute
Notes: These servers were purchased and contributed by Dr. Lucian Ilie of the Department of Computer Science at Western University for research in bioinformatics and mathematics.
Memory: 256.0 GB
Local storage: 150 GB
Total attached storage: None
Suitable use

Note: This system includes subsystems contributed by research groups. A contributing group receives preferential access to its resources, administered on a "best efforts" basis by the SHARCNET system administrators, and jobs submitted by a contributing group have a higher priority than others. For the policies on contributed systems, please refer to Contribution of Computational Assets to SHARCNET.

Software available

FDTD, GCC, UTIL, GSL, OCTAVE, INTEL, MPFUN2015, PETSC_SLEPC, FFTW, SQ, SAMTOOLS, BIOPERL, OPENCV, EMACS, OPENMPI, VIM, NETCDF, MPFR, GROMACS, PYTHON, BIOSAMTOOLS, BINUTILS, CPAN, IMSL, FREEFEM++, ABAQUS, NCL, NCBIC++TOOLKIT, GHC, R, PERL, MPIBLAST, BIOPERLRUN, MrBAYES, GMP, SYSTEM, SPRNG, SUBVERSION, BOOST, OPEN64, MPC, PNETCDF, ESPRESSO, GAUSSIAN, BLAST, GNUPLOT, COREUTILS, TEXLIVE, GIT, LLVM, ADF/BAND, OPENJDK, CMAKE, HDF, SUPERLU, TINKER, PROOT, ACML, GDB, MONO, MATLAB, IPM, AUTODOCKVINA, MPFUN90, NIX, GNU, MAPLE, DAR, CDF, PARI/GP, COMSOL, SIESTA, NINJA, QD, ORCA, RUBY, CHARM++, YT, PGI, MKL, LDWRAPPER, VALGRIND, MERCURIAL, RLWRAP, ILOGCPLEX, ARPACK-NG, GEANT4

Recent System Notices

Nov 07 2019, 09:59AM

This cluster has been decommissioned and is no longer available.

Oct 30 2019, 12:23PM

5 nodes of goblin (gb[50-54]) have been shut down and are being moved to dusky. We hope to complete this by the end of business today.

All the remaining nodes of goblin will be decommissioned Wednesday November 6 and the cluster will be permanently shut down.

Oct 25 2019, 09:22AM

5 nodes of goblin (gb[50-54]) will be moved to dusky starting at midday October 30.

All the remaining nodes of goblin will be decommissioned Wednesday November 6 and the cluster will be permanently shut down.

Oct 16 2019, 12:01PM

The cluster is back up after the brief unscheduled power outage.

Oct 16 2019, 11:48AM

Goblin, dusky and copper are currently down after a brief unscheduled power outage. We are working on recovering them.
