Revision as of 13:54, 13 June 2017 by Jdesjard (Talk | contribs)


[Images: Graham1.png, Graham3.png, Graham2.png]

Graham is targeted to become available to users during the week of June 19th, 2017.

Graham is the largest and by far the most powerful cluster in the current SHARCNET fleet of supercomputers. Also known as GP3, Graham is part of the major 2017 renewal of academic supercomputers in Canada; the other new systems are Arbutus (GP1) at the University of Victoria, Cedar (GP2) at Simon Fraser University, and Niagara (LP) at the University of Toronto. A SHARCNET system notice will be sent to all users when Graham is ready for access. SHARCNET users will be able to log in to this system with their Compute Canada username and password. In the meantime, several resources have been put in place to help users familiarise themselves with this new system.

General information about migrating work from existing systems to the new national general purpose systems is available on the Compute Canada Wiki page at:

Properties of the system including its address, node composition and file systems, etc. can be found on the Compute Canada Wiki page at:

Instructions for running jobs via the Slurm scheduler are available on the Compute Canada Wiki page at:
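As a preview of the Slurm workflow covered in those instructions, a minimal batch script might look like the sketch below. The account name, resource requests, and program name are placeholders; consult the Compute Canada documentation for the values appropriate to your group and job.

```shell
#!/bin/bash
# Minimal Slurm job-script sketch (all values below are placeholders).
#SBATCH --account=def-yourgroup   # your allocation account
#SBATCH --time=0-01:00            # wall-clock limit: 1 hour
#SBATCH --cpus-per-task=1         # one CPU core
#SBATCH --mem-per-cpu=4G          # memory per core
#SBATCH --job-name=test-job

echo "Running on $(hostname)"
./my_program                      # replace with your own executable
```

A script like this would be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`, in contrast to the `sqsub` commands used on older SHARCNET systems.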

A recent SHARCNET General Interest Webinar describes what to expect from the new systems and demonstrates some important usage differences from other SHARCNET systems. A recording of this webinar is available at the SHARCNET YouTube channel:

Short introductory video recordings covering different aspects of the new national general purpose clusters are available as a playlist at the Compute Canada YouTube channel:

Once the Graham system is available, SHARCNET staff will present daily demonstrations of basic workflow on Graham in the SN-Seminars Vidyo room at:

Following a brief usage demonstration, the support staff will stay online for the remainder of the hour to discuss access and usage topics relating to the Graham system. These live demonstrations/discussions will be posted, along with other SHARCNET events, on the calendar at:

For support requests relating to the Graham system, email or .

Quick facts

  • Number of CPU cores: 32,168
  • Number of nodes: 1043
  • Total memory (RAM): 149 TB (4.6 GB/core on average)
  • Number of NVIDIA P100 GPUs: 320
  • Networking: EDR (CPU nodes) and FDR (GPU nodes) InfiniBand
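The quoted average memory per core follows from the totals above; a quick check (using decimal units, 1 TB = 1000 GB):

```python
# Verify the "4.6 GB/core on average" figure from the quick facts.
total_ram_gb = 149 * 1000   # 149 TB expressed in GB (decimal units)
cpu_cores = 32168

per_core_gb = total_ram_gb / cpu_cores
print(round(per_core_gb, 1))  # prints 4.6
```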

Useful links