|Target usage||Large, tightly coupled MPI jobs|
|System information||see the Requin system page in the web portal|
|System status||see the Requin status page|
|Real-time system data||see the Ganglia monitoring page|
|Related||Full list of SHARCNET systems|
|CPU||AMD Opteron, 2.6 GHz|
|Memory/node||8 GB (compute), 4 GB (login)|
|OS||HP Linux XC 3.2.1|
For system notices and history, please visit the Requin system page in the SHARCNET web portal.
System Access and User Environment
Requin login nodes have the standard SHARCNET settings, including limits on memory use, number of concurrent logins, and CPU time.
Each node has 160 GB of local storage (/tmp), and 70 TB of space is available as the Requin-local /work and /scratch filesystems.
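For illustration, a job might stage temporary files on fast node-local /tmp and keep larger inputs and results on the shared filesystems. The per-user directories under /work and /scratch below follow the usual SHARCNET layout but are assumptions, as are the file and program names:

    # Hypothetical staging pattern (paths and names are illustrative):
    cp /work/$USER/input.dat /tmp/            # stage input onto node-local disk
    ./my_program /tmp/input.dat /tmp/out.dat  # compute against local storage
    cp /tmp/out.dat /scratch/$USER/           # copy results back to shared space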
Requin currently uses an older scheduler infrastructure that does not treat job memory use as a schedulable resource. In practice this is rarely a concern: each node has only 2 CPUs, and the scheduler never shares a node between two parallel jobs.
Otherwise, sqsub and related tools behave as usual.
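As a minimal sketch, an MPI job on Requin might be submitted as follows; the core count, runtime, output file, and program name are placeholder values, not Requin-specific requirements:

    # Submit a 16-process MPI job with a 2-hour runtime limit;
    # stdout/stderr are written to simulation.log.
    sqsub -q mpi -n 16 -r 2h -o simulation.log ./my_mpi_program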
Note that requin is the only cluster with a pre-emptive test queue: jobs submitted with sqsub -t ... start almost immediately. Users may submit only one job to this queue at a time, and jobs in this queue are limited to 1 hour of runtime.
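For example, a short test run might be submitted like this; apart from the -t flag, the options mirror a normal submission, and the values shown are again placeholders:

    # Submit to the pre-emptive test queue: starts almost immediately,
    # but only one queued job per user and at most 1 hour of runtime.
    sqsub -t -q mpi -n 4 -r 1h -o test.log ./my_mpi_program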