Cluster dusky.sharcnet.ca
Links: System documentation in the SHARCNET Help Wiki
Manufacturer: HP
Operating System: CentOS 7.6
Interconnect: 10G
Total processors/cores: 1316
Nodes:

Nodes 1-20: 24 cores (2 sockets x 12 cores per socket), Xeon E5-2670 v3 @ 2.3 GHz
  Type: Compute
  Memory: 64.0 GB
  Local storage: 500 GB

Node 27: 32 cores (4 sockets x 8 cores per socket), Xeon E5-4620 @ 2.2 GHz
  Type: Compute
  Memory: 1024.0 GB
  Local storage: 140 GB

Nodes 28-31: 12 cores (2 sockets x 6 cores per socket), Xeon E5-262 @ 2.0 GHz
  Type: Compute
  Memory: 256.0 GB
  Local storage: 140 GB

Nodes 32-39: 16 cores (2 sockets x 8 cores per socket), Xeon E5-2630 @ 2.4 GHz
  Type: Compute
  Notes: 8 x NVIDIA Tesla K80 GPUs
  Memory: 96.0 GB
  Local storage: 1.8 TB

Nodes 40-59: 24 cores (2 sockets x 12 cores per socket), Xeon E5-2690 v3 @ 2.6 GHz
  Type: Compute
  Memory: 64.0 GB
  Local storage: 480 GB

Nodes 60-63: 24 cores (2 sockets x 12 cores per socket), Xeon E5-2690 v3 @ 2.6 GHz
  Type: Compute
  Memory: 128.0 GB
  Local storage: 480 GB

Node 76: 12 cores (2 sockets x 6 cores per socket), Xeon E5-2620 v3 @ 2.4 GHz
  Type: Compute
  Memory: 768.0 GB
  Local storage: 5 TB

Node 77: 40 cores (2 sockets x 20 cores per socket), Xeon Gold 6148 @ 2.4 GHz
  Type: Compute
  Memory: 768.0 GB
  Local storage: 5 TB
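The per-group figures above can be cross-checked against the "Total processors/cores: 1316" entry. A minimal sketch, assuming each listed range is fully populated (node counts are inferred from the ranges; the core counts are copied from the table):

```python
# Cross-check the cluster's total core count against the node table.
# (node count, cores per node) for each group listed above.
node_groups = [
    (20, 24),  # nodes 1-20
    (1, 32),   # node 27
    (4, 12),   # nodes 28-31
    (8, 16),   # nodes 32-39 (GPU nodes)
    (20, 24),  # nodes 40-59
    (4, 24),   # nodes 60-63
    (1, 12),   # node 76
    (1, 40),   # node 77
]

total_cores = sum(nodes * cores for nodes, cores in node_groups)
print(total_cores)  # 1316
```

The sum matches the advertised total exactly, which confirms that node 27 is a single node rather than a range.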
Total attached storage: 0 Bytes
Suitable use:
Note: This system is contributed by research groups. Contributing groups receive preferential access to the resources, determined on a "best efforts" basis by the SHARCNET system administrator, and jobs submitted by contributing groups have a higher priority than others. For the policies on the contribution of systems, please refer to Contribution of Computational Assets to SHARCNET.
Software available:
FDTD, GDB, GCC, MONO, OCTAVE, R, INTEL, FFTW, OPENCV, EMACS, OPENMPI, AUTODOCKVINA, LSDYNA, PYTHON, BOOST, ABAQUS, SPARK, PERL, NINJA, SYSTEM, LAMMPS, LLVM, BINUTILS, ESPRESSO, BLAST, SUBVERSION, GIT, ACML, NCBIC++TOOLKIT, GEANT4
Recent System Notices
May 02 2022, 03:09PM
  All compute nodes are back up after cooling maintenance.

Apr 20 2022, 10:06AM
  All compute nodes will be down for maintenance on the room cooling system on Monday May 2, starting at 9am, and are expected to return to service before 5pm. The scheduler has been configured to prevent jobs from starting if they would overlap with this period.

Nov 18 2021, 10:34AM
  The /project filesystem is available again.

Nov 18 2021, 10:06AM
  /project is currently unavailable/hanging due to an upstream filesystem problem. We're working on a solution.

Nov 17 2021, 01:20PM
  The cluster is back up after the scheduler upgrade. Please report any problems to help@sharcnet.ca.