Target usage: CELL cluster
System information: see prickly system page in web portal
System status: see prickly status page
Real-time system data: see Ganglia monitoring page
Full list of SHARCNET systems

Prickly was decommissioned as of June 2012.

General information

This is a heterogeneous cluster composed of 4 x86_64 multicore nodes and 8 Cell multicore nodes, connected via Gigabit Ethernet. The x86_64 nodes have dual-socket quad-core Xeon CPUs @ 2.50 GHz and 8 GB of RAM; the Cell nodes have dual-socket PowerXCell 8i CPUs @ 3.2 GHz and 16 GB of RAM.

NOTE: there is no automated job scheduler on prickly - users must request a reservation for nodes by email.

Logging in

For example, to log in to pri05 remotely (you would be assigned the particular node to log in to when you contact us to arrange a reservation):

[user1@localhost]$ ssh prickly.sharcnet.ca
[user1@prickly ~]$ ssh pri05
[user1@pri05 ~]$

At this point you should check to see who else is on the system and whether anything is running, e.g.:

[merz@pri05 ~]$ who
merz     pts/0        2009-03-26 09:59 (prickly.prickly.sharcnet)
^^^ only one user (me) is logged in

[merz@pri05 ~]$ uptime
10:00:18 up 62 days, 17:37,  1 user,  load average: 0.00, 0.00, 0.00
^^^ load = 0.00 means no jobs are running

You can also use the 'top' command to see what is running.
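For a quick non-interactive snapshot, a minimal sketch using standard top options (the pipe through head is only to trim the output) is to run top in batch mode:

[merz@pri05 ~]$ top -b -n 1 | head -n 15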

Cell SDK

The Cell SDK can be found on the Cell nodes at:


Users can copy it into their work directory and then proceed with compiling and running the examples and demos.
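As a rough sketch (the SDK install location is not listed above; /opt/cell/sdk is only an assumption here, and the /work path follows the pattern used elsewhere on this page), copying the SDK sources into your work directory and building them might look like:

[user1@pri05 ~]$ cp -r /opt/cell/sdk/src /work/user1/cell-sdk-src   # assumed SDK path
[user1@pri05 ~]$ cd /work/user1/cell-sdk-src
[user1@pri05 cell-sdk-src]$ make                                     # build the examples and demos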

IBM XL Compilers

The IBM XL compilers should be on your path if you are logged into one of the Cell nodes. It is recommended that users use these compilers; the available drivers are listed below (a short compile example follows the list):

ppuxlC        ppuxlc++      ppuxlc_r      ppuxlf        ppuxlf2003_r  ppuxlf90_r    ppuxlf95_r    
ppuxlc        ppuxlC_r      ppuxlc++_r    ppuxlf2003    ppuxlf90      ppuxlf95      ppuxlf_r      
spuxlC        spuxlc        spuxlc++      spuxlf        spuxlf2003    spuxlf90      spuxlf95  
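As a minimal sketch (hello.c and spu_kernel.c are hypothetical file names, and only the standard -O3 and -o options are shown), compiling PPU-side and SPU-side code with the XL drivers could look like:

[user1@pri05 ~]$ ppuxlc -O3 -o hello hello.c              # PPU (host) code
[user1@pri05 ~]$ spuxlc -O3 -o spu_kernel spu_kernel.c    # SPU code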

One can find the compiler documentation on the Cell nodes at:

 Fortran:  /opt/ibmcmp/xlf/cbe/11.1/README    
 C/C++  :  /opt/ibmcmp/xlc/cbe/10.1/doc/en_US/pdf/ 

Open MPI 1.3.1 for QS22

Users may wish to use an MPI library built with the XL compilers; if so, they should add the following to their shell environment (i.e. at the end of their ~/.bashrc file):

# host specific configs
case `echo $CLUSTER` in
  prickly)
    export PATH="/work/merz/lib64/openmpi-1.3.1-xl/bin/:$PATH"
    ;;
esac

The above can be extended to configure your environment for other specialized SHARCNET clusters.
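As a sketch (the second cluster name and its path below are purely hypothetical placeholders), an extra pattern can be added for each cluster that needs its own settings:

# host specific configs
case `echo $CLUSTER` in
  prickly)
    export PATH="/work/merz/lib64/openmpi-1.3.1-xl/bin/:$PATH"
    ;;
  othercluster)                                         # hypothetical cluster name
    export PATH="/path/to/other/software/bin:$PATH"     # hypothetical path
    ;;
esac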

At this point, users will want to consult the Open MPI FAQ for further details about how to run MPI jobs.
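As a rough illustration only (hello_mpi.c, the hostfile contents beyond pri05, and the process count are assumptions; the Open MPI FAQ remains the authoritative reference), compiling and launching a small job with this Open MPI build might look like:

[user1@pri05 ~]$ mpicc -o hello_mpi hello_mpi.c
[user1@pri05 ~]$ cat hosts
pri05
pri06
[user1@pri05 ~]$ mpirun -np 4 --hostfile hosts ./hello_mpi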