
SMP iqaluk.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: Dell
Operating System: CentOS 6
Interconnect: NUMA
Total processors/cores: 32
Nodes:
    iqaluk: 1 node, 32 cores (4 sockets x 8 cores per socket)
    CPU: Intel Xeon X7500 @ 2.0 GHz
    Type: Compute
    Memory: 1024.0 GB
    Local storage: None
Total attached storage: 10.7 TB
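
Since all of iqaluk's cores and memory sit in a single shared-memory node, a threaded job can sanity-check the resources it actually sees at run time. A minimal C sketch (illustrative only; the file name and expected values are assumptions, not taken from SHARCNET documentation):

    /* cores.c -- hypothetical sketch: report the core count and physical
     * memory visible to a job on a shared-memory node such as iqaluk.
     * Compile with: gcc -O2 cores.c -o cores
     */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* expected: 32 */
        long pages = sysconf(_SC_PHYS_PAGES);        /* Linux/glibc extension */
        long psize = sysconf(_SC_PAGESIZE);
        printf("online cores: %ld\n", cores);
        printf("physical memory: %.1f GB\n",         /* expected: ~1024 GB */
               (double)pages * psize / 1e9);
        return 0;
    }
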
Suitable use: threaded, large-memory
Note: This system is contributed by a research group. Jobs submitted by contributing groups have a higher priority than others. For the policies on the contribution of systems, please refer to Contribution of Computational Assets to SHARCNET.
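
For the threaded, large-memory workloads this node targets, OpenMP is the usual model, and on a 4-socket NUMA machine memory placement matters: the thread that first writes a page determines which socket's memory holds it. A minimal sketch of that "first touch" pattern (hypothetical code, assuming GCC with OpenMP support; the array size is illustrative):

    /* first_touch.c -- hypothetical sketch of a threaded, large-memory job.
     * Initializing the array with the same loop schedule that later reads it
     * keeps each page on the socket whose threads use it ("first touch").
     * Compile with: gcc -O2 -fopenmp first_touch.c -o first_touch
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        long n = 1L << 28;                     /* 2^28 doubles = 2 GiB */
        double *a = malloc(n * sizeof *a);
        if (!a) { perror("malloc"); return 1; }

        #pragma omp parallel for schedule(static)   /* first touch */
        for (long i = 0; i < n; i++)
            a[i] = (double)i;

        double sum = 0.0;
        #pragma omp parallel for schedule(static) reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += a[i];

        printf("threads=%d sum=%g\n", omp_get_max_threads(), sum);
        free(a);
        return 0;
    }

Run with, e.g., OMP_NUM_THREADS=32 ./first_touch; using the same static schedule in both loops is the design choice that keeps the first-touch placement aligned with the later access pattern.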

Software available

FDTD, GCC, UTIL, GSL, OCTAVE, MPFUN2015, PETSC_SLEPC, FFTW, SAMTOOLS, BIOPERL, OPENCV, EMACS, INTEL, VIM, NETCDF, MPFR, PYTHON, BIOSAMTOOLS, BINUTILS, BOOST, SPARK, CPAN, FREEFEM++, NCL, NCBIC++TOOLKIT, GHC, R, PERL, BIOPERLRUN, MrBAYES, GMP, SYSTEM, SPRNG, SUBVERSION, OPEN64, MPC, PNETCDF, OPENMPI, BLAST, GNUPLOT, COREUTILS, TEXLIVE, GIT, GAUSSIAN, LLVM, OPENJDK, CMAKE, HDF, SUPERLU, TINKER, PROOT, ACML, GDB, MODE, GDL, IPM, AUTODOCKVINA, MPFUN90, NIX, GAP, GNU, DAR, CDF, PARI/GP, SIESTA, PATHSCALE, NINJA, QD, ORCA, RUBY, CHARM++, YT, PGI, MKL, LDWRAPPER, VALGRIND, MERCURIAL, MATLAB, ILOGCPLEX, ARPACK-NG, GEANT4


Recent System Notices

Feb 05 2019, 12:05PM

One of the legacy global filesystems will be migrated to new hardware on Wednesday, February 20th. To complete this, we must unmount the filesystem from all clusters and prevent jobs from running during the outage.

All legacy clusters will be configured to avoid running any jobs after 3pm on February 19.

We expect all legacy clusters to return to service the following day at 10am.

This outage does not affect Graham or Orca.

Jan 24 2019, 06:33PM

Wobbie and Iqaluk have been operating normally.

Sep 26 2018, 12:18PM

There will be another power outage at the McMaster datacenter at 7am on September 29. All systems will be shut down before this time, so any jobs still running at that point will be killed.

Sep 05 2018, 02:53PM

Please report any problems you notice to help@sharcnet.ca.

Sep 05 2018, 01:30PM

There's been a fibre cut (in London, around 11am) which has killed both links from McMaster to the SHARCNET WAN. As a result, several clusters are unreachable. In general, jobs will hang in this condition, since they won't be able to do I/O to global storage resources.

This sort of thing is usually fixed fairly soon (hours); we’ll update when we know more.
