Target usage: serial and/or small threaded/MPI jobs
System information: see the Guppy system page in the web portal
System status: see the Guppy status page
Real-time system data: see the Ganglia monitoring page
Full list of SHARCNET systems: see the SHARCNET web portal
Guppy is a contributed system: it is open to all SHARCNET users, but the groups that donated the hardware have priority access. Contributor jobs may take precedence in the job scheduler, delaying other users' jobs by up to 7 days. If your software uses licenses, you should check that your jobs are not being suspended while tying up licenses unduly.
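One quick way to verify that your jobs are not sitting suspended is to list them with the sqjobs utility. This is a minimal sketch; the exact state codes shown depend on the scheduler version, so consult the local documentation.

sqjobs    # lists your queued, running, and suspended jobs with their current state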
| CPU Model        | Intel Xeon 2.67 GHz | Intel Xeon 2.8 GHz |
| Memory/node      | 24 GB               | 24 GB              |
| /scratch Storage | 8.79 TB             |                    |
For system notices and history, please visit the Guppy system page in the SHARCNET web portal.
System Access and User Environment
Please note that login nodes are only to be used for short computations that do not require a lot of resources. To ensure this, some of the resource limits on login nodes have been set to low values. To see your limits, execute ulimit -a; on Guppy's login node it reports:

[isaac@gup-hn:~] ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 212992
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 8192
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) 3600
max user processes              (-u) 100
virtual memory          (kbytes, -v) 1000000
file locks                      (-x) unlimited
To change a limit, run, for example:
ulimit -v 2000000
which sets the virtual-memory limit to 2,000,000 KB (about 2 GB).
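Note that ulimit affects only the current shell and its child processes. A minimal sketch of raising the limit for a single run (my_program is a hypothetical placeholder):

# raise the virtual-memory limit for this shell, verify it, then run
ulimit -v 2000000    # 2,000,000 KB, about 2 GB
ulimit -v            # with no value, prints the current limit: 2000000
./my_program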
In general, users will experience the best performance on Guppy by ensuring that their jobs use whole nodes. Measurements have shown that MPI jobs sharing nodes with other jobs slow down, with the slowdown depending on resource contention.
This means that, in general, MPI jobs should use multiples of 8 cores. Threaded jobs can use up to 8 cores, since every Guppy node has 8 Xeon cores.
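For example, a threaded job occupying a full node could be submitted as follows (the runtime, output file name, and program name here are placeholders):

sqsub -q threaded -n 8 -r 1h -o thr_job.out ./my_threaded_app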
When submitting MPI jobs, use the -n and -N flags together to ensure the job is scheduled onto full nodes. For example, if your program uses 64 processes, submit it as:
sqsub -q mpi -n 64 -N 8 <...>
It is important to include -N 8 to ensure the job is not scattered across nodes where other users' jobs are running.
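A fuller submission might look like the following sketch (the runtime, output file name, and program name are hypothetical placeholders):

sqsub -q mpi -n 64 -N 8 -r 2h -o mpi_job.out ./my_mpi_app

With -n 64 and -N 8, the scheduler packs 8 processes onto each node, matching the 8 cores available per node.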