requin
Hostname: requin.sharcnet.ca
Target usage: Large, tightly-coupled MPI jobs
System information: see the Requin system page in the web portal
System status: see the Requin status page
Real-time system data: see the Ganglia monitoring page
Full list of SHARCNET systems


System Overview

Spec           Info                   Remarks
CPU            AMD Opteron, 2.6 GHz
Cores/node     2
Memory/node    8 GB                   4 GB on login/admin nodes
Interconnect   Quadrics Elan4
Storage        70 TB
OS             HP Linux XC 3.2.1
Max. jobs      5000

For system notices and history, please visit the Requin system page in the SHARCNET web portal.

System Access and User Environment

Login Nodes

Requin login nodes provide the standard SHARCNET user environment, with a 1-hour CPU-time limit and a 3 GB virtual-memory limit per process.
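
These per-process limits can be checked from a login-node shell with the bash built-in ulimit; the commands below are generic and simply report whatever limits are currently in effect:

   # Report the CPU-time limit (in seconds) and the virtual-memory
   # limit (in kB) applied to the current shell
   ulimit -t
   ulimit -v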

Storage

Each node provides 160 GB of local storage (/tmp), and 70 TB of shared space is available as /oldwork and /scratch.
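
To see how much space is currently free in these locations, the standard df command works as usual (paths taken from the description above; output will vary):

   # Show free space on node-local /tmp and the shared filesystems
   df -h /tmp /scratch /oldwork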

Submitting Jobs

Requin currently uses an older scheduler, which does not treat job memory use as a schedulable resource. Fortunately, nodes have only 2 CPUs, and the scheduler never shares a node between two parallel jobs, so this is rarely of concern.

Submit jobs as normal using sqsub (no --mpp parameter is needed).
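
As a sketch, a typical MPI submission on Requin might look like the following; the process count, run time, output file, and program name are illustrative placeholders, and the exact sqsub options should be confirmed with sqsub --help:

   # Submit an 8-process MPI job with a 2-hour run-time limit,
   # writing job output to mpi_job.out
   sqsub -q mpi -n 8 -r 2h -o mpi_job.out ./my_mpi_program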

Note that on Requin, the test queue is fully enabled.
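
For short test runs, a submission along these lines should reach the test queue; the --test flag is the usual sqsub convention for this, but confirm it against sqsub --help on Requin:

   # Quick 10-minute, 2-process test run via the test queue
   sqsub --test -q mpi -n 2 -r 10m -o test.out ./my_mpi_program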