Cluster dusky.sharcnet.ca
Links | System documentation in the SHARCNET Help Wiki
Manufacturer | HP
Operating System | CentOS 7.6
Interconnect | 10G
Total processors/cores | 1756
Nodes | Cores | CPU | Type | Memory | Local storage | Notes
1-20 | 24 (2 sockets x 12 cores) | Xeon E5-2670 v3 @ 2.3 GHz | Compute | 64.0 GB | 500 GB |
27 | 32 (4 sockets x 8 cores) | Xeon E5-4620 @ 2.2 GHz | Compute | 1024.0 GB | 140 GB |
28-31 | 12 (2 sockets x 6 cores) | Xeon E5-2620 @ 2.0 GHz | Compute | 256.0 GB | 140 GB |
32-39 | 16 (2 sockets x 8 cores) | Xeon E5-2630 @ 2.4 GHz | Compute | 96.0 GB | 1.8 TB | 8 x NVIDIA Tesla K80 GPUs
40-59 | 24 (2 sockets x 12 cores) | Xeon E5-2690 v3 @ 2.6 GHz | Compute | 64.0 GB | 480 GB |
60-63 | 24 (2 sockets x 12 cores) | Xeon E5-2690 v3 @ 2.6 GHz | Compute | 128.0 GB | 480 GB |
76 | 12 (2 sockets x 6 cores) | Xeon E5-2620 v3 @ 2.4 GHz | Compute | 768.0 GB | 5 TB |
77 | 40 (2 sockets x 20 cores) | Xeon Gold 6148 @ 2.4 GHz | Compute | 768.0 GB | 5 TB |
78 | 32 (2 sockets x 16 cores) | Xeon Silver 4314 @ 2.4 GHz | Compute | 128.0 GB | 480 GB |
80-96 | 24 (2 sockets x 12 cores) | Xeon Gold 5317 @ 3.0 GHz | Compute | 1024.0 GB | 480 GB |
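The per-range node counts and cores-per-node listed above can be cross-checked against the cluster's advertised total of 1756 cores with a short script (node ranges transcribed from the table; range bounds are inclusive):

```python
# Cross-check: node ranges and cores-per-node from the table above
# should sum to the advertised cluster total of 1756 cores.
node_groups = [
    # (first_node, last_node, cores_per_node)
    (1, 20, 24),    # Xeon E5-2670 v3
    (27, 27, 32),   # Xeon E5-4620
    (28, 31, 12),   # Xeon E5-2620
    (32, 39, 16),   # Xeon E5-2630, K80 GPU nodes
    (40, 59, 24),   # Xeon E5-2690 v3
    (60, 63, 24),   # Xeon E5-2690 v3
    (76, 76, 12),   # Xeon E5-2620 v3
    (77, 77, 40),   # Xeon Gold 6148
    (78, 78, 32),   # Xeon Silver 4314
    (80, 96, 24),   # Xeon Gold 5317
]

total_cores = sum((last - first + 1) * cores
                  for first, last, cores in node_groups)
print(total_cores)  # 1756
```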
Total attached storage | 220 TB
Suitable use | Note: This system is contributed by research groups. Contributing groups receive preferential access to the resources, administered on a "best efforts" basis by the SHARCNET system administrator, and jobs submitted by contributing groups have a higher priority than others. For the policies on the contribution of systems, please refer to Contribution of Computational Assets to SHARCNET.
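Since access is mediated by the batch scheduler's priority rules, work is submitted as batch jobs. A minimal submission sketch for the K80 GPU nodes (32-39) follows; it assumes a Slurm-style scheduler, and the resource values and program name are placeholders, not verified against dusky's actual configuration (consult the SHARCNET Help Wiki linked above for the real submission procedure):

```shell
#!/bin/bash
# Hypothetical Slurm job script; all values below are placeholder
# assumptions, not dusky's documented configuration.
#SBATCH --time=01:00:00       # wall-clock limit
#SBATCH --cpus-per-task=4     # CPU cores for the task
#SBATCH --mem=8G              # host memory
#SBATCH --gres=gpu:1          # request one of the node's K80 GPUs
srun ./my_gpu_program         # placeholder executable name
```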
Software available | FDTD, GDB, GCC, MONO, OCTAVE, R, INTEL, FFTW, OPENCV, EMACS, OPENMPI, AUTODOCKVINA, LSDYNA, PYTHON, BOOST, ABAQUS, SPARK, PERL, NINJA, SYSTEM, LAMMPS, LLVM, BINUTILS, ESPRESSO, BLAST, SUBVERSION, GIT, ACML, NCBIC++TOOLKIT, GEANT4
Recent System Notices

Date | Notes
Aug 11 2023, 11:21AM | The cluster is back in service after the cooling system failure.
Aug 10 2023, 03:38PM | The entire cluster remains shut down due to a cooling failure. Technicians are performing some final changes tomorrow morning, so we now expect service to be restored by midday Friday (August 11).
Aug 08 2023, 02:42PM | The entire cluster remains shut down due to a cooling failure. Technicians are performing maintenance today and tomorrow, and we expect service to be restored Thursday (August 10).
Aug 06 2023, 10:47AM | The entire cluster is shut down due to a cooling failure. Because of the nature of the failure, this outage is expected to extend into next week; we will provide an estimate of return to service whenever we can.
Feb 13 2023, 10:08AM | The /data and /scratch filesystems recovered after the hardware was power-cycled. We are continuing to monitor for anomalies.