copper
Hostname: copper.sharcnet.ca
Target usage: threaded/MPI jobs, GPU jobs, large memory jobs
System information: see copper system page in web portal
System status: see copper status page
Real time system data: see Ganglia monitoring page
Full list of SHARCNET systems


Hardware

  • nodes 1 - 8
    • 16 CPU cores (Dual Intel Xeon E5-2630 v3 @ 2.4 GHz)
    • 4 NVIDIA Tesla K80 GPU cards (8 Kepler GK210 GPU devices/chips, Compute capability: 3.7)
    • Memory: 96.0 GB
    • Run time limited to four (4) hours for non-contributors.
  • nodes 9 - 28
    • 24 CPU cores (Dual Intel Xeon E5-2690 v3 @ 2.6 GHz)
    • Memory: 64.0 GB
    • Run time limited to four (4) hours for non-contributors.
  • nodes 29 - 32
    • 24 CPU cores (Dual Intel Xeon E5-2690 v3 @ 2.6 GHz)
    • Memory: 128.0 GB
    • Non-contributed nodes; contact us to get access
  • node 33
    • 12 CPU cores (Dual Intel Xeon E5-2620 v3 @ 2.4 GHz)
    • Memory: 768.0 GB
    • Contributor access only

Usage

Copper is a contributed cluster (with 4 non-contributed nodes), and the contributors have higher priority for jobs. Jobs from non-contributing groups are limited to a runtime of 4 hours on the contributed nodes.

Copper uses SLURM as its scheduler, but jobs can still be submitted with the standard sqsub command.
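Because the scheduler is SLURM, jobs can also be submitted natively with sbatch. Below is a minimal sketch of a batch script roughly equivalent to the 1-GPU sqsub example that follows; the generic resource name gpu and the directive values are assumptions, not site policy, so check the copper system page for the actual settings:

```shell
#!/bin/bash
#SBATCH --job-name=test        # job name shown in squeue
#SBATCH --time=00:05:00        # 5-minute runtime, matching -r 5m
#SBATCH --mem=11G              # memory request, matching --mpp=11g
#SBATCH --gres=gpu:1           # one GPU device (resource name assumed)
#SBATCH --output=copper1.txt   # combined stdout/stderr file

./test.x
```

Submit the script with sbatch and monitor it with squeue, as on any SLURM system.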

Job submission examples:

1 CPU, 1 GPU:

sqsub -q gpu -r 5m --mpp=11g -o copper1.txt ./test.x

1 CPU, 4 GPU:

sqsub -q gpu --mpp=45g --gpp=4 -r 5m -o copper4.txt ./test.x

1 CPU, 8 GPU:

sqsub -q gpu --mpp=90g --gpp=8 -r 5m -o copper8.txt ./test.x
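The GPU examples above scale the memory request (--mpp) with the number of GPU devices requested (--gpp), at roughly 11 GB per device. A small helper sketch that prints, without submitting, the matching sqsub line for a given device count; the flat 11 GB/device ratio and the file names are illustrative only (note the examples above round up to 45g and 90g):

```shell
# Build (but do not run) an sqsub line for a given number of GPU
# devices, scaling memory at a flat ~11 GB per device.
gpu_submit_line() {
    gpus=$1
    mem=$((11 * gpus))g
    echo "sqsub -q gpu --gpp=${gpus} --mpp=${mem} -r 5m -o copper${gpus}.txt ./test.x"
}

# prints: sqsub -q gpu --gpp=8 --mpp=88g -r 5m -o copper8.txt ./test.x
gpu_submit_line 8
```

Inspect the printed command and adjust the memory request before actually submitting.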