redfin
Hostname: redfin.sharcnet.ca
Target usage: General purpose with a focus on large-memory parallel applications
System information: see the redfin system page in the web portal
System status: see the redfin status page
Real-time system data: see the Ganglia monitoring page
Full list of SHARCNET systems


System Overview

Node Range        1-14                        15-24
CPU Model         AMD Opteron 2.1 GHz (6172)  AMD Opteron 2.1 GHz (6172)
Cores/node        24                          24
Memory/node       98 GB                       196 GB
Interconnect      QDR InfiniBand              QDR InfiniBand
/scratch Storage  10 TB                       10 TB
OS                CentOS 6.3                  CentOS 6.3

For system notices and history, please visit the Redfin system page in the SHARCNET web portal.

System Access and User Environment

Login Nodes
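The login nodes enforce per-user resource limits; note in particular the CPU-time limit of 3600 seconds, the cap of 100 user processes, and the virtual-memory limit of roughly 1 GB in the output below: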

[isaac@red-admin:~] ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 127036
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 8192
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) 3600
max user processes              (-u) 100
virtual memory          (kbytes, -v) 1000000
file locks                      (-x) unlimited

General Information

Redfin is a contributed system: it is open to all SHARCNET users, but the groups that contributed it receive higher scheduling priority.

It is very similar to Orca (based on 12-core AMD Opteron Magny-Cours CPUs, 2 per node, for 24 cores per node, with a 4X QDR InfiniBand network) but has more memory per core. There are 24 nodes in the system, for a total of 576 cores.

The first 14 nodes of the system (red[1-14]) have 98 GB of memory per node (approximately 4 GB/core), while the remaining 10 (red[15-24]) have double that, 196 GB (approximately 8 GB/core).
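For example, a job that specifically needs the 8 GB/core nodes can request the corresponding per-process memory with sqsub's --mpp flag. A minimal sketch (my_program.x and the 24-hour runtime are placeholders for your own executable and limits):

 sqsub -q mpi -o out.%J -e out.%J --mpp=8G -n 24 -r 24h ./my_program.x

A per-process request of 8 GB effectively steers the job toward red[15-24], since the 98 GB nodes can hold far fewer such processes.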

Submitting Jobs

Jobs submitted by regular users must specify a runtime limit of 4 hours or less ( sqsub -r 4h ... ) to be eligible to run on red[1-14]; the standard runtime limit of 7 days applies only on red[15-24]. As a consequence, any job submitted to the mpi queue requesting more than 240 cores, or more than the equivalent total memory on red[15-24] (10 nodes * 196 GB = 1960 GB), must specify a maximum runtime of 4 hours or less to be eligible to start.
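For instance, a 288-core mpi job exceeds the 240 cores available on red[15-24], so it must declare a runtime of 4 hours or less. A sketch, with my_mpi_program.x standing in for your own executable:

 sqsub -q mpi -o out.%J -e out.%J -n 288 -r 4h ./my_mpi_program.x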

Note that some of the nodes on the cluster are reserved for the contributing group and cannot run normal users' jobs.

Local Storage

Redfin has 10 TB of local NFS storage, provided to each node as the /scratch filesystem.

Each redfin node has 57 GB of temporary storage at /tmp . It should be accessed only when necessary, and only by running jobs. Please contact us if you think you need to use it rather than /scratch or /work .
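If you do need node-local storage, a typical pattern is to stage input into /tmp at the start of the job, run from there, and copy results back before exiting. A minimal sketch (all paths and my_program.x are illustrative):

 #!/bin/bash
 # Stage data into node-local /tmp, run, then copy results back to /work.
 TMPDIR=/tmp/$USER.$$          # per-job directory name (illustrative)
 mkdir -p "$TMPDIR"
 cp /work/$USER/input.dat "$TMPDIR/"
 cd "$TMPDIR"
 /work/$USER/my_program.x input.dat > output.dat
 cp output.dat /work/$USER/
 rm -rf "$TMPDIR"              # clean up; /tmp is only 57 GB per node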

System-specific Usage Concerns

OpenMPI "Cannot allocate memory"

If your MPI job uses a significant amount of memory and is communication-intensive, you may see it fail with a message similar to the following:

[red7][[42101,1],161][../../../../../openmpi-1.4.2/ompi/mca/btl/openib/connect/btl_openib_connect_oob.c:464:qp_create_one] error creating qp errno says Cannot allocate memory

In this case you should submit your job so that OpenMPI is told to "unpin" memory. This requires passing "--nompirun" to sqsub, invoking mpirun yourself, and adding a configuration option to the mpirun command line. For example, to run "my_mpi_program.x" as a 48-way job for 20 hours, one would submit the job like this:

 sqsub -q mpi --nompirun -o out.%J -e out.%J --mpp=1G -n 48 -r 20h mpirun -mca mpi_leave_pinned 0 ./my_mpi_program.x
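Setting the Open MPI MCA parameter mpi_leave_pinned to 0 tells the library not to keep communication buffers registered ("pinned") with the InfiniBand hardware between messages; this avoids exhausting registered memory, at some cost in communication performance.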