
Cluster dusky.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 7.6
Interconnect: 10G
Total processors/cores: 1316
Nodes

Nodes 1‑20: 24 cores (2 sockets x 12 cores per socket), Xeon E5-2670 v3 @ 2.3 GHz
  Type: Compute
  Memory: 64.0 GB
  Local storage: 500 GB

Node 27: 32 cores (4 sockets x 8 cores per socket), Xeon E5-4620 @ 2.2 GHz
  Type: Compute
  Memory: 1024.0 GB
  Local storage: 140 GB

Nodes 28‑31: 12 cores (2 sockets x 6 cores per socket), Xeon E5-2620 @ 2.0 GHz
  Type: Compute
  Memory: 256.0 GB
  Local storage: 140 GB

Nodes 32‑39: 16 cores (2 sockets x 8 cores per socket), Xeon E5-2630 @ 2.4 GHz
  Type: Compute
  Notes: 8 x NVIDIA Tesla K80 GPUs
  Memory: 96.0 GB
  Local storage: 1.8 TB

Nodes 40‑59: 24 cores (2 sockets x 12 cores per socket), Xeon E5-2690 v3 @ 2.6 GHz
  Type: Compute
  Memory: 64.0 GB
  Local storage: 480 GB

Nodes 60‑63: 24 cores (2 sockets x 12 cores per socket), Xeon E5-2690 v3 @ 2.6 GHz
  Type: Compute
  Memory: 128.0 GB
  Local storage: 480 GB

Node 76: 12 cores (2 sockets x 6 cores per socket), Xeon E5-2620 v3 @ 2.4 GHz
  Type: Compute
  Memory: 768.0 GB
  Local storage: 5 TB

Node 77: 40 cores (2 sockets x 20 cores per socket), Xeon Gold 6148 @ 2.4 GHz
  Type: Compute
  Memory: 768.0 GB
  Local storage: 5 TB
Total attached storage: 0 Bytes
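
The per-group core counts listed above can be cross-checked against the stated total of 1316 processors/cores. Below is a minimal sketch in Python; the node-group figures are copied directly from the node listing, and the script itself is only illustrative (it is not part of the cluster's software):

    # Cross-check the "Total processors/cores" figure against the node listing above.
    # Each entry: (number of nodes in the group, cores per node), copied from the table.
    node_groups = [
        (20, 24),  # nodes 1-20
        (1, 32),   # node 27
        (4, 12),   # nodes 28-31
        (8, 16),   # nodes 32-39 (K80 GPU nodes)
        (20, 24),  # nodes 40-59
        (4, 24),   # nodes 60-63
        (1, 12),   # node 76
        (1, 40),   # node 77
    ]

    total_cores = sum(nodes * cores for nodes, cores in node_groups)
    print(total_cores)          # 1316
    assert total_cores == 1316  # matches the figure reported for the cluster
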
Suitable use

Note: This system was contributed by research groups. Contributing groups receive preferential access to its resources, administered on a "best efforts" basis by the SHARCNET system administrators, and jobs submitted by contributing groups have a higher priority than others. For the policies governing contributed systems, please refer to Contribution of Computational Assets to SHARCNET.

Software available

FDTD, GDB, GCC, MONO, OCTAVE, R, INTEL, FFTW, OPENCV, EMACS, OPENMPI, AUTODOCKVINA, LSDYNA, PYTHON, BOOST, ABAQUS, COMSOL, SPARK, PERL, NINJA, SYSTEM, LAMMPS, LLVM, BINUTILS, ESPRESSO, BLAST, SUBVERSION, GIT, ACML, NCBIC++TOOLKIT, GEANT4

Recent System Notices

Status Notes
Mar 15 2021, 01:42PM
(28 days ago)

The cluster is back up after the cooling maintenance. Technicians were unable to complete the maintenance because the wrong parts were shipped by the manufacturer. Maintenance will be rescheduled.

Mar 15 2021, 09:15AM
(28 days ago)

The cluster is down to allow technicians to perform preventative maintenance on the data centre cooling system.

It is expected to return to service sometime on Wednesday 17 March.

Mar 04 2021, 10:26AM
(about 1 month ago)

The cluster will be down for 48 hours starting 10am Monday March 15 to allow technicians to perform preventative maintenance on the data centre cooling system.

No running jobs should be lost. Scheduled jobs will not be started unless they will complete before the downtime.
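
That last point reflects how the scheduler treats a maintenance window: a queued job is started only if its requested walltime would let it finish before the downtime begins. A minimal sketch of that rule in Python, using the window announced above (the function is purely illustrative and is not scheduler code):

    from datetime import datetime, timedelta

    # Downtime announced above: 48 hours starting 10am Monday, March 15 2021.
    downtime_start = datetime(2021, 3, 15, 10, 0)

    def can_start_now(now: datetime, requested_walltime: timedelta) -> bool:
        """A queued job is started only if it would finish before the downtime begins."""
        return now + requested_walltime <= downtime_start

    # Example: at 9am on March 14, a 12-hour job still fits; a 48-hour job does not.
    now = datetime(2021, 3, 14, 9, 0)
    print(can_start_now(now, timedelta(hours=12)))  # True
    print(can_start_now(now, timedelta(hours=48)))  # False
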

Sep 07 2020, 12:24PM
(7 months ago)

The cluster is available again after a problem with the /project filesystem.

Sep 07 2020, 12:02PM
(7 months ago)

The cluster is unavailable due to a problem with /project; we are investigating.
