Cluster: dusky.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 7.6
Interconnect: 10G
Total processors/cores: 1756
Nodes

1-20: 24 cores (2 sockets x 12 cores per socket)
  CPU: Intel Xeon E5-2670 v3 @ 2.3 GHz
  Type: Compute
  Memory: 64.0 GB
  Local storage: 500 GB

27: 32 cores (4 sockets x 8 cores per socket)
  CPU: Intel Xeon E5-4620 @ 2.2 GHz
  Type: Compute
  Memory: 1024.0 GB
  Local storage: 140 GB

28-31: 12 cores (2 sockets x 6 cores per socket)
  CPU: Intel Xeon E5-2620 @ 2.0 GHz
  Type: Compute
  Memory: 256.0 GB
  Local storage: 140 GB

32-39: 16 cores (2 sockets x 8 cores per socket)
  CPU: Intel Xeon E5-2630 @ 2.4 GHz
  Type: Compute
  Notes: 8 x NVIDIA Tesla K80 GPUs
  Memory: 96.0 GB
  Local storage: 1.8 TB

40-59: 24 cores (2 sockets x 12 cores per socket)
  CPU: Intel Xeon E5-2690 v3 @ 2.6 GHz
  Type: Compute
  Memory: 64.0 GB
  Local storage: 480 GB

60-63: 24 cores (2 sockets x 12 cores per socket)
  CPU: Intel Xeon E5-2690 v3 @ 2.6 GHz
  Type: Compute
  Memory: 128.0 GB
  Local storage: 480 GB

76: 12 cores (2 sockets x 6 cores per socket)
  CPU: Intel Xeon E5-2620 v3 @ 2.4 GHz
  Type: Compute
  Memory: 768.0 GB
  Local storage: 5 TB

77: 40 cores (2 sockets x 20 cores per socket)
  CPU: Intel Xeon Gold 6148 @ 2.4 GHz
  Type: Compute
  Memory: 768.0 GB
  Local storage: 5 TB

78: 32 cores (2 sockets x 16 cores per socket)
  CPU: Intel Xeon Silver 4314 @ 2.4 GHz
  Type: Compute
  Memory: 128.0 GB
  Local storage: 480 GB

80-96: 24 cores (2 sockets x 12 cores per socket)
  CPU: Intel Xeon Gold 5317 @ 3.0 GHz
  Type: Compute
  Memory: 1024.0 GB
  Local storage: 480 GB
Total attached storage: 220 TB
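
Because the node groups differ widely in core count and memory, it can help to keep the inventory in a small script when estimating capacity or choosing a target node type. The sketch below is purely illustrative (the figures are transcribed from the table above; it is not an official SHARCNET tool); it reproduces the 1756-core total and shows a simple query over the groups:

    # Node-group inventory for dusky, transcribed from the table above.
    # Illustrative sketch only; not an official SHARCNET tool.
    node_groups = [
        # (node range, node count, cores per node, memory per node in GB)
        ("1-20",  20, 24,   64.0),
        ("27",     1, 32, 1024.0),
        ("28-31",  4, 12,  256.0),
        ("32-39",  8, 16,   96.0),  # each node also has 8 x NVIDIA Tesla K80 GPUs
        ("40-59", 20, 24,   64.0),
        ("60-63",  4, 24,  128.0),
        ("76",     1, 12,  768.0),
        ("77",     1, 40,  768.0),
        ("78",     1, 32,  128.0),
        ("80-96", 17, 24, 1024.0),
    ]

    # Total cores across all groups; agrees with the 1756 quoted above.
    total_cores = sum(count * cores for _, count, cores, _ in node_groups)
    print(f"Total cores: {total_cores}")  # -> Total cores: 1756

    # Example query: node groups with at least 512 GB of memory per node.
    large_mem = [rng for rng, _, _, mem in node_groups if mem >= 512]
    print("Large-memory groups:", ", ".join(large_mem))  # -> 27, 76, 77, 80-96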
Suitable use

Note: This system is contributed by research groups. Contributing groups receive preferential access to its resources, allocated on a "best efforts" basis by the SHARCNET system administrator, and jobs submitted by contributing groups have a higher priority than others. For the policies on the contribution of systems, please refer to Contribution of Computational Assets to SHARCNET.

Software available

FDTD, GDB, GCC, MONO, OCTAVE, R, INTEL, FFTW, OPENCV, EMACS, OPENMPI, AUTODOCKVINA, LSDYNA, PYTHON, BOOST, ABAQUS, SPARK, PERL, NINJA, SYSTEM, LAMMPS, LLVM, BINUTILS, ESPRESSO, BLAST, SUBVERSION, GIT, ACML, NCBIC++TOOLKIT, GEANT4
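
The entries above are package names from the software stack; on SHARCNET systems such packages are typically made available through environment modules, so a tool may be installed without appearing on PATH until its module is loaded. The snippet below is a minimal illustrative check (the lowercase command names, e.g. gcc and octave, are assumptions about how the packages expose their binaries):

    # Minimal availability check for some of the packages listed above.
    # On the cluster these are normally exposed via environment modules,
    # so a tool may be installed yet absent from PATH until its module
    # is loaded; the lowercase command names below are assumptions.
    import shutil

    for tool in ["gcc", "gdb", "git", "perl", "python", "octave", "R"]:
        path = shutil.which(tool)
        print(f"{tool:8s} {path or 'not on PATH (module not loaded?)'}")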

Recent System Notices

Feb 27 2024, 11:21AM

The cooling system has been repaired and the cluster is fully operational.

Feb 22 2024, 07:10AM

The cooling system has failed again; technicians will be looking at it this morning. The cluster will not be restarted this morning as originally planned. We'll post another update when we know more.

Feb 21 2024, 01:43PM

The cooling system has been repaired, but the technicians want to let it run without load for a while to confirm stability. Provided that all goes well, we expect to begin restarting the cluster tomorrow morning, and it should be running normally by 10am.

Feb 17 2024, 10:13AM

The cluster is currently down due to a cooling system failure. Running jobs have been lost; queued jobs will remain in the queue. It is expected to return to service no earlier than Tuesday, February 20, depending on the technicians' assessment of the problem.

Aug 11 2023, 11:21AM

The cluster is back in service after the cooling system failure.
