
Cluster: dusky.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 6.3
Interconnect: 10G
Total processors/cores: 480
Nodes 1-20:
  24 cores: 2 sockets × 12 cores per socket (Intel Xeon E5-2670 v3 @ 2.3 GHz)
  Type: Home
  Memory: 64.0 GB
  Local storage: 500 GB
Total attached storage: 0 bytes
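The figures above are self-consistent: 20 nodes × 2 sockets × 12 cores per socket = 480 cores.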
Suitable use

Note: This system is contributed by a research group. The contributing group receives preferential access to its resources, granted on a "best efforts" basis by the SHARCNET system administrator. Jobs submitted by the contributing group have a higher priority than other jobs. For the policies on the contribution of systems, please refer to Contribution of Computational Assets to SHARCNET.

Software available

FDTD, GDB, GCC, MONO, OCTAVE, R, INTEL, FFTW, OPENCV, EMACS, OPENMPI, AUTODOCKVINA, LSDYNA, PYTHON, BOOST, ABAQUS, COMSOL, SPARK, PERL, NINJA, SYSTEM, LAMMPS, LLVM, BINUTILS, ESPRESSO, BLAST, SUBVERSION, GIT, ACML, NCBIC++TOOLKIT, GEANT4
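
Since both GCC and OPENMPI appear in the list above, a minimal MPI hello-world in C is one way to exercise this stack. This is only a sketch: the file name, the mpicc invocation, and any module-loading steps are assumptions about the local setup, not details confirmed by this page.

    /* hello_mpi.c - minimal MPI sketch; assumes the GCC + OpenMPI
       toolchain listed above (exact local setup may differ). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                       /* shut down cleanly */
        return 0;
    }

Built with something like "mpicc hello_mpi.c -o hello_mpi" and launched through the cluster's scheduler, a single 24-core node could host up to 24 ranks.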

Recent System Notices

Nov 25 2019, 03:33PM (15 days ago)

Cluster is back up after the brief unscheduled power outage.

Nov 25 2019, 03:14PM (15 days ago)

Copper and dusky are down due to a brief unscheduled power outage. We are working on recovering them.

Nov 19 2019, 03:58PM (21 days ago)

We have almost completed recovery of the scheduler database, and we believe that running jobs will complete normally. Configuration for all contributor groups has already been restored, but the scheduler may operate more slowly than normal until the recovery fully completes.

Nov 19 2019, 11:53AM (21 days ago)

The cluster scheduler has crashed due to database corruption. We are working on restoring a fresh database; once it is ready, existing running jobs and the records of those jobs will be lost.

We hope to have the scheduler operational again this afternoon.

Nov 18 2019, 04:42PM (22 days ago)

The cluster scheduler has crashed and we are having some trouble restarting it. Currently running jobs will continue to run and the login node is working, but no new jobs can be submitted or started.

We are working on the problem.
