|Target usage||Large, tightly-coupled MPI jobs|
|System information||see the Requin system page in the web portal|
|System status||see the Requin status page|
|Real-time system data||see the Ganglia monitoring page|
|Full list of SHARCNET systems|
|Cores (CPU)||AMD Opteron 2.6 GHz|
|Memory/node||8 GB (compute); 4 GB (login/admin)|
|OS||HP Linux XC 3.2.1|
For system notices and history, please visit the Requin system page in the SHARCNET web portal.
System Access and User Environment
Requin login nodes provide the standard SHARCNET resources, with a 1-hour CPU time limit and a 3 GB virtual memory limit per process.
Each node provides 160 GB of local storage (/tmp), and 70 TB of shared space is available as /oldwork and /scratch.
Requin currently uses an older scheduler that does not treat job memory use as a schedulable resource. Fortunately, each node has only 2 CPUs, and the scheduler never shares a node between two parallel jobs, so this is rarely a concern.
Submit jobs as usual with sqsub; the --mpp parameter is not needed on Requin.
Note that on Requin, the test queue is fully enabled.
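As a sketch of typical submissions (the executable name, core counts, output filenames, and runtimes below are placeholders, not values from this page):

```shell
# Submit a 16-process MPI job for up to 2 hours.
# Note: no --mpp flag -- Requin's scheduler does not schedule by memory.
sqsub -q mpi -n 16 -r 2h -o hello.out ./hello_mpi

# Short validation run in the test queue (-t), which is fully enabled on Requin.
sqsub -t -q mpi -n 4 -r 10m -o test.out ./hello_mpi
```

Since the scheduler never shares a node between parallel jobs, a job's processes have the full 8 GB of each allocated node to themselves.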