 
[[File:Graham1.png|400px]]  [[File:Graham3.png|400px]]  [[File:Graham2.png|330px]]
 
Graham is SHARCNET's newest and by far most powerful supercomputer (cluster), currently being built on the University of Waterloo campus. It should become available to users from SHARCNET and, more generally, Compute Canada in late May or early June of 2017. It is also known as GP3 and is part of the major renewal of academic supercomputers in Canada in 2017; the other new systems are Arbutus (GP1) at the University of Victoria, Cedar (GP2) at Simon Fraser University, and Niagara (LP) at the University of Toronto.
  
 
== Quick facts ==
 
* Number of CPU cores: 32,168
* Number of nodes: 1,043
* Total memory (RAM): 149 TB (4.6 GB/core on average; see the quick check below)
* Number of NVIDIA P100 GPUs: 320
* Networking: EDR (CPU nodes) and FDR (GPU nodes) InfiniBand
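The average memory per core quoted above follows directly from the two totals in this list. The short Python sketch below reproduces it; it assumes decimal units (1 TB = 1000 GB), which is what yields the quoted ~4.6 GB/core.

<syntaxhighlight lang="python">
# Sanity check of the "4.6 GB/core on average" figure, using only the
# totals quoted in the list above (decimal TB/GB assumed).
total_ram_tb = 149      # total memory (RAM), in TB
cpu_cores = 32_168      # total number of CPU cores

gb_per_core = total_ram_tb * 1000 / cpu_cores
print(f"{gb_per_core:.2f} GB/core")   # prints 4.63 GB/core, i.e. ~4.6 GB/core
</syntaxhighlight>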

== Useful links ==