
Revision as of 17:08, 14 April 2010

Knowledge Base / Expanded FAQ




SHARCNET stands for Shared Hierarchical Academic Research Computing Network. Established in 2000, SHARCNET is the largest high performance computing consortium in Canada, involving seventeen universities and colleges across southern, central and northern Ontario.

SHARCNET also refers to a grid of high performance clusters comprising thousands of processors on a dedicated, private high speed wide area network with a throughput of 1 gigabit per second. Powered by the Ontario Research Innovation Optical Network (ORION) and state-of-the-art operating system environments, the SHARCNET grid enables researchers to seamlessly run a single parallel application across multiple clusters deployed at different institutions.

SHARCNET is a member consortium in the Compute/Calcul Canada national HPC platform.

Where is SHARCNET?

The main office of SHARCNET is located in the Western Science Centre at The University of Western Ontario. The SHARCNET high performance clusters are installed at each institution of the consortium and operated by SHARCNET staff across different sites.

What does SHARCNET have?

The infrastructure of SHARCNET consists of a group of 64-bit high performance Itanium2, Xeon and Opteron clusters, along with a group of storage units, deployed at a number of universities and colleges. These high performance clusters are interconnected through the Ontario Research Innovation Optical Network (ORION) with a private, dedicated connection currently running at 1 gigabit per second. SHARCNET clusters run the Linux operating system.

What can I do with SHARCNET?

If you have a program that takes months to run on your PC, you could probably run it within a few hours using hundreds of processors on the SHARCNET grid, provided your program is inherently parallelisable. If you have hundreds or thousands of test cases to run on your PC or the computers in your lab, then running those cases independently on hundreds of processors will significantly reduce your test cycle.

If you have used Beowulf clusters made of commodity PCs, you may notice a performance improvement on SHARCNET clusters, which have high-speed Quadrics, Myrinet and Infiniband interconnects, as well as on SHARCNET machines that have large amounts of memory. Also, the SHARCNET clusters themselves are connected through a dedicated, private connection over the Ontario Research Innovation Optical Network (ORION).

If you have access to other super computing facilities at other places and you wish to share your ideas with us and SHARCNET users, please contact us. Together we can make SHARCNET better.

Who is running SHARCNET?

The daily operation and development of SHARCNET computational facilities is managed by a group of highly qualified system administrators. In addition, we have a team of high performance technical computing consultants, who are responsible for technical support on libraries, programming and application analysis.

How do I contact SHARCNET?

For technical inquiries, you may send E-mail to, or contact your local system administrator or HPC specialist. For general inquiries, you may contact the SHARCNET main office.

My application runs on Windows, can I run it on SHARCNET?

It depends. If your application is written in a high level language such as C, C++ or Fortran and is system independent (meaning it does not depend on any third party libraries that are available only for Windows), then you should be able to recompile and run your application on SHARCNET systems. However, if your application depends entirely upon special software for Windows, then you are out of luck. In general it is impossible to convert code at the binary level between Windows and any UNIX platform.

My application runs on Windows HPC clusters, can I run it on SHARCNET clusters?

If your application does not use any Windows-specific APIs, then you should be able to recompile and run it on SHARCNET's UNIX/Linux based clusters.

Getting an Account with SHARCNET and Related Issues

What is required to obtain a SHARCNET account?

Anyone who would like to use SHARCNET may apply for an account.

  • There is no fee for academic access, but users are responsible for providing SHARCNET with citations for publications aided by SHARCNET resources.
  • There are no shared/group accounts, each person who uses SHARCNET requires their own account and must not share their password.
  • All SHARCNET users must read and follow the policies listed here
  • All academic SHARCNET users must obtain a Compute Canada identifier and link it to their SHARCNET account.

How do I apply for an account?

Please do not send E-mail or call to request an account. If you are having trouble with the following instructions please contact

The procedure for acquiring an account depends on whether or not your position is in Canada, and whether or not it is an academic position.

  1. faculty and students at Canadian institutions can apply for their SHARCNET account as follows:
    1. acquire a Compute Canada Identifier
    2. login to the Compute Canada Database
      • navigate to the Consortium Accounts Page
      • click the Apply button next to the word SHARCNET
      • at this point you should follow the instructions on the SHARCNET website to complete the application
  2. international academic users who do not have a Compute Canada Identifier should apply for an account via the SHARCNET new user application form
    • for non-faculty accounts you need to know the SHARCNET username of your sponsor when applying
    • applicants must have an institutional email account that we can use for contact purposes
      • commercial email accounts such as Yahoo, Gmail or Hotmail are not acceptable
    • please note that if the browser you are using already thinks it is logged in, you will need to logout in order for this link to work correctly. Otherwise you will be taken to the "My Account" page of the logged in user
    • your application will be processed within 24 hours and you will receive an E-mail notification of whether your application is approved or declined
  3. all other account requests (commercial access, non-academic, etc.) should use the new user application form and are approved following consultation with the Scientific Director. If you are working outside of academia we recommend you read our Commercial Access Policy which can be found in the SHARCNET web portal here.

I have an existing SHARCNET account and need to link it to a new Compute Canada account, how do I do that?

You first need to get a Compute Canada Role Identifier (CCRI) and then you may link it to your existing SHARCNET account. The process is as follows:

  1. submit a Compute Canada Account Application
    • creating this account will also create a CCRI
    • if you have a sponsor, you will need to ask them for their CCRI in order to complete this step
  2. confirm your email by clicking on the link in the email message you receive
    • you may have to check your spam folder
  3. wait for your sponsor or siteleader to approve your account
    • when approved, you'll receive email indicating your account is active
    • it will also contain your new CCRI
  4. login to the CC website (using the email+password from step 1 above) and go to the Consortium Accounts Page
    • select the "Link Account" button to the right of "SHARCNET"
  5. this will take you to a sharcnet webpage
    • you may need to login using your sharcnet username/password
    • push the "Link my Accounts!" button on that page

How do I get a Compute Canada Identifier (CCI)?

Please visit the Compute Canada FAQ for further information on what a CCI is, and how to acquire one.

How do I link my SHARCNET account to my Compute Canada Identifier?

If you have an existing SHARCNET account and need to link it with your Compute Canada Identifier (CCI) please login to the CCDB Compute Canada Portal and navigate to the Consortium Accounts Page and click the "Link Account" button next to the word SHARCNET.

Follow any further instructions as necessary. Please send any questions you may have about this process to

What is a role?

Each person may have one or more roles associated with each of their current and past positions. These various roles ultimately link back to one's Compute Canada Identifier. In the web portal you may see different information depending on which role you have selected to be active. For further information about roles please see the SHARCNET-specific role information here and the more general Compute Canada information here.

Can I just have a cluster account without having a web portal account?

No. The web portal account is a web interface to your account in our user database. It also provides you a way of managing your information and keeping track of problems you have, which will be useful for troubleshooting if you encounter the same type of problem.

Can I E-mail or call to open an account?

No, please follow the instructions above.

OK, I've seen and heard the word "web portal" enough, what is it anyway?

A web portal is a web site that offers online services. Usually a web portal has a database at the backend, in which people can store and access personal information, but it may involve other software services like this wiki. At SHARCNET, registered users can login to the web portal, manage their profiles, submit and review programming and performance related problems, look up solutions to problems, contribute to our wiki, and assess their SHARCNET usage, amongst other things.

In the account application form, what should I fill in the "sponsor" field?

If you are a faculty member, then leave the "sponsor" field blank. Otherwise, the sponsor field is the SHARCNET username of your supervisor or collaborator (if you are a visiting scholar, for example, use the username of the person who invited you).

My supervisor does not have an account, so my application can't go through, what should I do?

If your supervisor does not have an account yet, please ask your supervisor to apply for an account first.

My supervisor forgot all about his/her username, so my application can't go through, what should I do?

Please have them send an E-mail to and we will re-inform them of their login credentials.

My supervisor does not use SHARCNET, why is my supervisor asked to have an account anyway?

Your supervisor's account ID is used to identify which group your account belongs to. We account for all usage at the group level.

I am a visiting scholar, in the application for an account, what should I fill in the field "sponsor" ?

You should fill in the username of the person who invited you.

I am changing supervisor or I am becoming faculty, and I already have a SHARCNET account. Should I apply for a new account?

No. Send all of the details to, and we will update your account.

Is there any charge for using SHARCNET?

SHARCNET is free for all academic research. If you are working outside of academia we recommend you read our Commercial Access Policy which can be found in the SHARCNET web portal here.

I forgot my password

You can reset your password here, or by clicking the "Forget password" link after trying to sign-in.

I forgot my username

If you forget your username, please send an E-mail to . Note that your web portal username and your cluster username are the same.

My account has been disabled (so I cannot login). What should I do?

Typically your account expiry date was not renewed by your sponsor before the "Account expiration" date shown on your profile. To fix this, ask your sponsor to visit their sponsored users page and click the "enable" radio button in the row corresponding to your user entry, then click the "Save" button below the user list. Note that while your account is disabled you will not be able to log into any SHARCNET cluster or the SHARCNET web portal!

I no longer want my SHARCNET account

If you would like to cease using SHARCNET (including access to all systems and list email), disabling your account depends on the type of account you have:

  1. For Primary Investigator accounts (sponsors, e.g. faculty):
    • email
  2. For all sponsored accounts:

By default all unused accounts will expire every summer if they are not re-enabled by your supervisor (or the site leader, for PI accounts), so you should only request this if you want your account disabled *now*. You are free to have your account re-enabled at any time in the future by emailing

Logging in to Systems, Transferring and Editing Files

How do I login to SHARCNET?

There is no single point of entry at present. "Logging in to SHARCNET" means you login to one of the SHARCNET systems. A complete list of SHARCNET systems can be found on our facilities page.

To login to a system, you need to use a Secure Shell (SSH) connection. If you are logging in from a UNIX-based machine, make sure it has an SSH client (ssh) installed (this is almost always the case on UNIX/Linux/OS X). If you have the same login name on both your local system and SHARCNET, and you want to login to, say, bull, you may use the command:


If your SHARCNET username is different from the username on your local systems, then you may use either of the following commands

ssh -l username

If you want to establish an X window connection so that you can use graphics applications such as gvim and xemacs, you can add the option -Y at the end of the command, e.g.

ssh -l username -Y

There is no need to set the X display on the host you login to.

If you are logging in from a computer running Windows and need some pointers, we recommend consulting our ssh for Windows Users tutorial.

How can I suspend and resume my session?

The program screen can start persistent terminals from which you can detach and reattach. The simplest use of screen is

screen -R

which will either reattach you to an existing session or create a new one if none exists. To terminate the current screen session, type exit. To detach manually (you are automatically detached if the connection is lost), press ctrl+a followed by d; you can then resume later as above. Note that ctrl+a is screen's escape sequence, so you have to press ctrl+a followed by a to get the regular effect of pressing ctrl+a inside a screen session (e.g., moving the cursor to the start of the line in a shell).

For a list of other ctrl+a key sequences, press ctrl+a followed by ?. For further details and command line options, see the screen manual (or type man screen on any of the clusters).

How can I access SHARCNET machines from a Windows PC?

Please see our Ssh for Windows Users tutorial.

What operating systems are supported?

UNIX in general. Currently, Linux is the only operating system used within SHARCNET.

What makes a cluster different than my UNIX workstation?

If you are familiar with UNIX, then using a cluster is not much different from using a workstation. When you login to a cluster, you in fact only log in to one of the cluster nodes. In most cases, each cluster node is a physical machine, usually a server class machine, with one or several CPUs, that is more or less the same as a workstation you are familiar with. The difference is that these nodes are interconnected with special interconnect devices and the way you run your program is slightly different. Across SHARCNET clusters, you are not expected to run your program interactively. You will have to run your program through a queueing system. That also means where and when your program gets to run is not decided by you, but by the queueing system.

Which cluster should I use?

Each of our clusters is designed for a particular type of job. Our cluster map shows which systems are suitable for various job types.

What programming languages are supported?

The primary programming languages C, C++ and Fortran are fully supported. Other languages, such as Java, Pascal and Ada, are also supported, but with limited technical support from us. That means that if your program is written in any language other than C, C++ and Fortran and you encounter a problem, we may or may not be able to solve it within a short period of time.

How do I organize my files?

Our experience is that when large amounts of storage are available, it is too easy to lose track of files, let stale copies accumulate, etc. The number of files that one can truly manage is also fairly modest and does not scale over time, or with availability of storage. For these reasons, SHARCNET provides the following pools of storage:

place     quota    expiry    access                  purpose
/home     200 MB   none      unified                 sources, small config files
/work     200 GB   none      per-cluster             active data files
/scratch  none     37 days   per-cluster             temporary files, checkpoints
/tmp      none     2 days    per-node                node-local scratch
archive   none     none      unified command-access  long term data archive

These distinctions reflect the fact that different kinds of files have very different properties, so are best implemented using different file systems, servers, RAID levels and backup policies.

Backups are in place for your home directory only. Scratch and work are not backed up. In general we store one version of each file for the previous 5 working days, one for each of the 4 previous weeks, and one version per month before that. Backups began in September 2006.

The access column represents our design for the new SHARCNET environment; it is not implemented on all clusters yet. /home is shown as unified - this means that when you login, regardless of cluster, you always see the same directory. Since /home is remote on most clusters, it's important that you not have lots of jobs doing IO to it. That is what /work is for, and is why most clusters have their own /work directory.

/scratch has no quota limit - so you can put as much data in /scratch/<userid> as you want, until there is no more space. The important thing to note though, is that all files on /scratch that are over 37 days old will be automatically deleted.

Once a file is created in /scratch/<userid>, reading it, renaming it, changing its timestamps with 'touch', or copying it into another file will not reset its age. The file will be expired 37 days after it was created.

Only files that have been modified (e.g. more information written to the file) will be safe from deletion.

If you'd like to reliably back up large volumes of data to archive storage, use the archive command rather than leaving the data on the per-cluster /work filesystems. There have been instances where data was lost on /work and /scratch, so it is definitely a good idea to back up your data to archive if it is important. archive is provided by the "Archive Tools" with command-only access (i.e. it is not possible for users to directly manipulate the filesystem). See the following FAQ entry for further details.

For users who want to learn more about optimizing I/O at SHARCNET please read Analyzing I/O Performance.

How are file permissions handled at SHARCNET?

By default, anyone in your group can read and access your files. You can provide access to any other users by following this Knowledge Base entry.

All SHARCNET users are associated with a primary GID (group id) belonging to the PI of their group (you can see this by running id username, substituting your own username). This allows groups to share files without any further action, as the default file permissions for all SHARCNET storage locations (e.g. /work/user) allow read (list) and execute (enter/access) permission for the group, e.g. they appear as:

  [sn_user@req770 ~]$ ls -ld /work/sn_user
  drwxr-x---  5 sn_user sn_group 4096 Jan 25 22:01 /work/sn_user

Further, by default the umask value for all users is 0002, so any new files or directories will continue to provide access to the group.

Should you wish to keep your files private from all other users, you should set the permissions on the base directory to only be accessible to yourself. For example, if you don't want anyone to see files in your home directory, you'd run:

chmod 700 ~/

If you want to ensure that any new files or directories are created with different permissions, you can set your umask value. See the man page for further details by running:

man umask

For further information on UNIX-based file permissions please run:

man chmod
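The effect of umask on newly created files and directories can be checked directly; this is a minimal sketch using a throwaway directory (the directory names are illustrative):

```shell
# work in a throwaway directory so nothing here touches real data
demo=$(mktemp -d)
cd "$demo"

# with the default umask of 0002, a new directory gets mode 775:
# full access for the owner and group, read/execute for others
umask 0002
mkdir group_shared
ls -ld group_shared        # drwxrwxr-x

# a umask of 0077 strips all group and other bits, giving mode 700
umask 0077
mkdir private_dir
ls -ld private_dir         # drwx------
```

Note that a umask set interactively lasts only for the current shell session; to make it permanent, set umask in your shell startup file.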

What about really large files?

If you need to work with really large files we have tips on optimizing performance with our parallel filesystems here.

How do I transfer files/directories to/from or between clusters?

To transfer files to and from a cluster on a UNIX machine, you may use scp or sftp. For example, if you want to upload file foo.f to cluster narwhal from your machine myhost, use the following command

myhost$ scp foo.f

assuming that your machine has scp installed. If you want to transfer a file from Windows or Mac, you need to have an scp or sftp client for Windows or Mac installed.

To transfer the file foo.f between SHARCNET clusters, say from your home directory on narwhal to your scratch directory on requin, simply use the following command

[username@nar316 ~]$ scp foo.f requin:/scratch/username/

If you are transferring directories between a UNIX machine and a cluster, you may use the scp command with the -r option. For instance, if you want to download the subdirectory foo in the directory project in your home directory on whale to your local UNIX machine, on your local machine use the command

myhost$ scp -rp .

Similarly, you can transfer a subdirectory between SHARCNET clusters. The following command

[username@nar316 ~]$ scp -rp requin:/scratch/username/foo .

will download subdirectory foo from your scratch directory on requin to your home directory on narwhal (note that the prompt indicates you are currently logged on to narwhal).

The -p option above preserves the time stamp of each file. For Windows and Mac, check the documentation of your scp client for these features.

You may also tar and compress the entire directory and then use scp to save bandwidth. In the above example, first you login to narwhal, then do the following

[username@nar316 ~]$ cd project
[username@nar316 ~]$ tar -cvf foo.tar foo
[username@nar316 ~]$ gzip foo.tar

Then on your local machine myhost, use scp to copy the tar file

myhost$ scp .

Note that for most Linux distributions, tar has a -z option that will compress the .tar file using gzip.
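On systems with GNU tar (the default on our Linux clusters), the separate tar and gzip steps above can be combined into one command; this sketch builds a stand-in foo directory so the example is self-contained:

```shell
# set up an example directory (a stand-in for your project data)
mkdir -p foo
echo "example data" > foo/data.txt

# create a gzip-compressed archive of foo in one step
tar -czvf foo.tar.gz foo

# on the receiving end, extract it, also in one step
tar -xzvf foo.tar.gz
```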

How do I access the same file from different subdirectories on the same cluster ?

You should not need to copy large files on the same cluster (e.g. from one user to another, or to use the same file in different subdirectories). Instead of using scp, you might consider issuing a "soft link" command. Assume that you need access to the file large_file1 in the directory /work/user1/subdir1, and you need it to appear in your directory /work/my_account/my_dir under the name my_large_file1. Then go to that directory and type:

ln -s /work/user1/subdir1/large_file1    my_large_file1

Another example: assume that in the directory /work/my_account/PROJ1 you have several subdirectories called CASE1, CASE2, ... In each CASEn subdirectory you have a slightly different code, but all of them process the same data file, called test_data. Rather than copying the test_data file into each CASEn subdirectory, place test_data one level up, i.e. in /work/my_account/PROJ1, and then in each CASEn subdirectory issue the following "soft link" command:

ln -s ../test_data  test_data

Soft links can be removed using the rm command. For example, to remove the soft link from /work/my_account/PROJ1/CASE2, type the following command from that subdirectory:

rm -rf test_data

Typing the above command from the subdirectory /work/my_account/PROJ1 would remove the actual file, and then none of the CASEn subdirectories would have access to it.
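The PROJ1/CASEn layout above can be sketched end-to-end in a scratch directory (all names mirror the example and are illustrative, not real accounts):

```shell
# build the example layout in a throwaway directory
proj=$(mktemp -d)
cd "$proj"
echo "shared input" > test_data
mkdir CASE1 CASE2

# create a soft link to the shared file inside each case directory
( cd CASE1 && ln -s ../test_data test_data )
( cd CASE2 && ln -s ../test_data test_data )

# readlink shows the target of a soft link; ls -l shows it with "->"
readlink CASE1/test_data

# removing a link from a case directory leaves the real file intact
rm CASE2/test_data
cat test_data
```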

How do I archive my data?

Please note: with the current version of the archive tools, it is strongly inadvisable to archive large amounts of data (>100 GB) in one step, as one archive, because it will probably fail. It is better to split your data into chunks of less than 100 GB each and archive them individually.

SHARCNET provides a small set of scripts (hereafter "Archive Tools") allowing users to move their data between clusters and our Archive (a long-term storage facility with 200 TB capacity). Please note that Archive is not directly mounted on any of our clusters, and can be accessed only through Archive Tools.

The only Archive Tools script directly used by users is archive. Executing this script without any parameters on any of our clusters will print a short description of the program:

[xxx@wha783 ~]$ archive
    archive [--get name-you-chose] [--put name-you-chose
    list-of-local-files-and-dirs] [--remove name-you-chose] [--list
    [name-you-chose]]

    --put          store into an archive of the given name
                   (the name can include subdirectories).
    --get          retrieve files of the given name
                   (also works with subdirectories).
    --list         show your archives or directories (or specific one).
    --remove       remove an archive or an empty directory.

    -h or --help        show usage (default option).
    --man               show man page.

The Tools provide basic functionality, such as the ability to move a collection of local files and/or directories to Archive (option --put), to move the data back (--get), to list the user's archives (--list), and to delete an archive (--remove).

An example: you want to archive the contents of two directories (with all their subdirectories), DIR1 and DIR2, and two files, file1 and file2. First, come up with a good (descriptive) name for your archive - say, DIR1-2_file1-2. Then, execute the following command:

archive --put DIR1-2_file1-2  DIR1 DIR2 file1 file2

It will create an (uncompressed) TAR file DIR1-2_file1-2 in Archive containing the data from DIR1, DIR2, file1, and file2. To better organize their archives, users can create directories and subdirectories in Archive during the execution of archive --put. For example,

archive --put Level1/Level2/name file1 file2

will first create nested directories Level1/Level2 in Archive, and then will copy file1 and file2 to the archive Level1/Level2/name. If the archive with such a name already exists in Archive, the command will fail.

TIP: It is strongly recommended to create a list of the files being archived during the execution of archive --put ... by redirecting the output to a local file:

archive --put Level1/Level2/name file1 file2 >& name.list

This is especially important if you create large (many gigabytes) archives containing many files. Keep these list files in one location. This will significantly simplify the task of locating one particular file or directory in your archives.
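Those *.list files are plain text, one archived path per line, so standard tools can search them; a sketch with hypothetical list files and paths:

```shell
# build two example list files (all contents here are hypothetical)
lists=$(mktemp -d)
printf 'DIR1/run01/output.dat\nDIR1/run02/output.dat\n' > "$lists/DIR1-2_file1-2.list"
printf 'DIR3/final/results.dat\n' > "$lists/DIR3.list"

# grep -l prints which list file - and hence which archive -
# contains a given file
grep -l 'run02/output.dat' "$lists"/*.list
```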

The opposite command is --get. For the above examples, the files will be copied back from Archive after executing the following commands:

archive --get DIR1-2_file1-2
archive --get Level1/Level2/name

If the local directories or files with the same names already exist, they will not be overwritten, and the program will produce error messages.

Finally, individual files or empty directories can be removed from Archive by executing archive --remove. Each deletion has to be explicitly confirmed (by typing "yes"). For example, to delete Level1/Level2/name from Archive, one has to execute the following sequence of commands, confirming each deletion with "yes":

archive --remove Level1/Level2/name
archive --remove Level1/Level2
archive --remove Level1

How can I see hidden files in a directory?

The "." at the beginning of a name means that the file is "hidden". You have to use the -a option with ls to see it, i.e. 'ls -a'.

If you want to display only the hidden files then type:

ls -d .*

Note: there is an alias which is loaded from /etc/bashrc (see your .bashrc file). The alias is defined as alias l.='ls -d .* --color=tty', so if you type:

l.

you will also display only the hidden files.
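As a quick sketch, you can verify the behaviour of these commands in a scratch directory:

```shell
# make a directory containing one hidden and one visible file
d=$(mktemp -d)
cd "$d"
touch .hidden visible

ls           # shows only: visible
ls -a        # also shows .hidden (plus . and ..)
ls -d .*     # only dot entries (note that .* also matches . and ..)
```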

I am unable to connect to one of the clusters; when I try, I am told the connection was closed by the remote host

The most likely cause of this behaviour is repeated failed login attempts. Part of our security policy involves blocking the IP address of machines that attempt multiple logins with incorrect passwords over a short period of time; many brute-force attacks on systems do exactly this, looking for poor passwords, badly configured accounts, etc. Unfortunately, it isn't uncommon for a user to forget their password, make repeated login attempts with incorrect passwords, and end up with their machine blacklisted and unable to connect at all.

A temporary solution is simply to attempt to login from another machine. If you have access to another machine at your site, you can shell to that machine first, and then shell to the SHARCNET system (as that machine's IP shouldn't be blacklisted). To have your machine unblocked, you will have to file a problem ticket, as a system administrator must manually intervene to fix it.

NOTE: there are other situations that can produce this message, however they are rarer and more transient. If you are unable to log in from one machine, but can from another, it is most likely the IP blacklisting that is the problem and the above will provide a temporary work-around while your problem ticket is processed.

I am unable to ssh/scp from SHARCNET to my local computer

Most campus networks are behind some sort of firewall. If you can ssh out to SHARCNET, but cannot establish a connection in the other direction, then you are probably behind a firewall and should speak with your local system administrator or campus IT department to determine if there are any exceptions or workarounds in place.


I get a warning that the remote host identification has changed when I login, what does it mean?

Suppose you attempt to login to SHARCNET, but instead get an alarming message like this:

Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending key in /home/hahn/.ssh/known_hosts:42
RSA host key for requin has changed and you have requested strict checking.
Host key verification failed. 

SSH normally tries to verify that the host you're connecting to is authentic. It does this by caching the host's "hostkey" in your ~/.ssh/known_hosts file. At times it may be necessary for a system's hostkey to legitimately change; when this happens, you may see such a message. It's a good idea to verify the change with us; you may also be able to check the fingerprint yourself by logging into another SHARCNET system and running:

ssh-keygen -l -f /etc/ssh/ 

If the fingerprint is OK, the normal way to fix the problem is to simply remove the old hostkey from your known_hosts file. You can use your choice of editor if you're comfortable doing so (it's a plain text file, but has long lines). On a unix-compatible machine, you can also use the following very small script (Substitute the line(s) printed in the warning message illustrated above for '42' here.):

perl -pi -e 'undef $_ if (++$line == 42)' ~/.ssh/known_hosts
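If you prefer not to count lines, OpenSSH's ssh-keygen can remove a host's cached key by name. The following sketch works on a throwaway file so nothing real is touched; the host names are made up, and the commented ssh-keygen line shows the by-name alternative for a real known_hosts file:

```shell
#!/bin/sh
# Work on a scratch copy rather than the real ~/.ssh/known_hosts.
demo=$(mktemp)
printf 'host1 ssh-rsa AAAA1\nhost2 ssh-rsa AAAA2\nhost3 ssh-rsa AAAA3\n' > "$demo"

# Delete line 2 (stands in for the "Offending key ... :42" line number):
perl -ni -e 'print unless $. == 2' "$demo"

wc -l < "$demo"    # 2 lines remain

# With a real known_hosts you can instead let OpenSSH remove it by host name:
#   ssh-keygen -R requin -f ~/.ssh/known_hosts
rm -f "$demo"
```

Either way, keep a backup copy of known_hosts before editing it, so a slip doesn't discard all of your cached host keys.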

Another solution is brute-force: remove the whole known_hosts file. This throws away any authentication checking, and your first subsequent connection to any machine will prompt you to accept a newly discovered host key. If you find this prompt annoying and you aren't concerned about security, you can avoid it by adding a text file named ~/.ssh/config on your machine with the following content:

StrictHostKeyChecking no

Ssh works, but scp doesn't!

If you can ssh to a cluster successfully, but cannot scp to it, the problem is likely that your login scripts print unexpected messages which confuse scp. scp is based on the same ssh protocol, but assumes that the connection is "clean": that is, that it does not produce any un-asked-for content. If your login script contains something like:

echo "Hello, Master; I await your command..."

scp will be confused by the salutation. To avoid this, simply ensure that the message is only printed on an interactive login:

if [ -t 0 ]; then
    echo "Hello, Master; I await your command..."
fi

or in csh/tcsh syntax:

if ( -t 0 ) then
    echo "Hello, Master; I await your command..."
endif
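You can confirm the fix locally without involving scp at all: a non-interactive shell (like the one scp starts) has a stdin that is not a terminal, so the `-t 0` test fails and the greeting is suppressed. A small sketch, writing the fragment to a scratch file for the demonstration:

```shell
#!/bin/sh
# Hypothetical ~/.bashrc fragment, written to a scratch file for the demo:
rc=$(mktemp)
cat > "$rc" <<'EOF'
if [ -t 0 ]; then
    echo "Hello, Master; I await your command..."
fi
EOF

# scp-style (non-interactive) sessions have stdin that is not a terminal,
# so the greeting is suppressed and the connection stays "clean":
out=$(bash "$rc" < /dev/null)
echo "non-interactive output: '$out'"
rm -f "$rc"
```

An interactive login (stdin attached to a terminal) would still print the greeting as before.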

How do I edit my program on a cluster?

We provide a variety of editors, such as the traditional text-mode emacs and vi (vim), as well as a simpler one called nano. If you have X on your desktop (and properly tunneled through SSH), you can use the GUI versions (xemacs, gvim).

Compiling and Running Programs

How do I compile my programs?

To make it easier to compile across all SHARCNET clusters, we provide a generic set of commands:

cc, c++, f77, f90, f95

and for MPI,

mpicc, mpic++, mpiCC, mpif77, mpif90, mpif95

These commands provide several benefits:

  • they select optimization flags appropriate to the cluster's CPUs.
  • using -lmpi or the mpi-prefixed commands will select the necessary cluster-specific options for MPI.
  • using -llapack links with the vendor-tuned LAPACK library.
  • using -openmp will direct the compiler to use OpenMP.

However, if you need BLAS or LAPACK routines, you should consider using the vendor-optimized libraries directly: ACML with the PathScale compilers on Opteron hardware, and MKL with the Intel compilers on Intel hardware. Both ACML and MKL include tuned BLAS and LAPACK routines. Refer to the ACML and MKL software pages for examples.

Here are some basic examples:

cc foo.c -o foo
cc -openmp foo.c -llapack -o foo
f90 *.f90 -lmpi -o my_mpi_prog
mpif90 *.f90 -o my_mpi_prog
f90 -mpi -c a.f90; mpif90 -c b.f90; f90 a.o b.o -lmpi -o my_mpi_prog

In the first example, the preferred compiler and optimization flags will be selected, but not much else happens. In the second case, the underlying compiler's OpenMP flag (which differs among compilers) is selected, as well as linking with a system-tuned LAPACK/BLAS library. In the third example, an MPI program written in Fortran 90 is compiled and linked with whatever cluster-specific MPI libraries are required. The fourth example is identical except that the mpi-prefixed command is used. In the fifth example, two files are separately compiled, then linked against the MPI libraries; the point is simply that even when only compiling (not linking), you need to declare that you're using MPI by using either an mpi-prefixed command, -mpi, or -lmpi.

These commands will invoke the underlying compilers, such as the Intel or PathScale compilers, whichever are available on the system you are using. For specific compiler options, please refer to the man pages.

You aren't required to use these commands, and may not want to if you have pre-existing Makefiles, for instance. You can always add -v to see what full commands are being generated. Here's a brief summary of compilers available on various systems:

system                                          compilers
Opteron systems (requin, narwhal, whale, bull)  pathcc pgcc gcc
Xeon systems (saw, mako)                        icc ifc ifort gcc
Itanium2 systems (silky)                        icc ifc ifort

On Intel Itanium 2 clusters in particular, you should always use the high performance Intel compilers icc and ifort for C/C++ and Fortran code respectively, if available. They give much better performance than the generic GNU compilers on this chip.

Relocation overflow and/or truncated to fit errors

If you get "relocation overflow" and/or "relocation truncated to fit" errors when you compile large Fortran 77 codes using pathf90 and/or ifort, then you should try the following:

(A) If the static data structures in your fortran 77 program are greater than 2GB you should try specifying the option -mcmodel=medium in your pathf90 or ifort command.

(B) Try running the code on a different system which has more memory:

   Other clusters that you can try are: requin, hound or bull

You would probably benefit from looking at the listing of all of the clusters on the web portal, which also includes a table showing how busy each one is.

How do I run a program?

In general, users are expected to run their jobs in "batch mode". That is, one submits a job -- the application problem -- to a queue through a batch queue command; the scheduler schedules the job to run at a later time, and the results are returned once the program is finished.

In particular, one uses the SQ commands (see What is the batch job scheduling environment SQ? below). For instance, to launch a serial job foo:

sqsub -o foo.log -r 5h ./foo

This submits the command foo as a job with a 5 hour runtime limit, putting its standard output into a file foo.log (note that it is important not to put too tight a runtime limit on your job, as it may sometimes run slower than expected due to interference from other jobs).

If your program takes command line arguments, place the arguments after your program name just as when you run the program interactively

sqsub -o foo.log -r 5h ./foo arg1 arg2...

For example, suppose your program takes command line options -i input and -o output for input and output files respectively, they will be treated as the arguments of your program, not the options of sqsub, as long as they appear after your program in your sqsub command

sqsub -o foo.log -r 5h ./foo -i input.dat -o output.dat

To launch a parallel job foo_p

sqsub -q mpi -n num_cpus -o foo_p.log -r 5h ./foo_p

The basic queues on SHARCNET are:

queue     usage
serial    for serial jobs
mpi       for parallel jobs using the MPI library
threaded  for threaded jobs using OpenMP or POSIX threads

To see the status of submitted jobs, use command sqjobs.

What about running a program compiled on one cluster on another?

In general, if your program starts executing on a system other than the one it was compiled on, then there are likely no issues. However, you may want to compare the results of test jobs just to make sure. The specific things to watch out for are:

  1. using a particular compiler and/or optimizations,
  2. using a particular library (most frequently a specific MPI implementation), and
  3. using the /home filesystem because it is global.

In general, as long as very specific architecture optimizations are not being used (e.g., -march=native), you should be able to compile a program on one SHARCNET system and run it on others, as most systems are binary compatible and the compiler runtime libraries are installed everywhere. In particular, this is true for our larger core systems and should be true for our other specialized systems as well---all of them except the Itanium systems (silky and bramble) are x86 based. It is worth noting that some compilers produce faster code on particular processors, and some compiler optimizations may not work on all systems, so you may want to recompile in order to get the best performance. We actually have different default compilers on different systems (Intel on saw, PathScale on the Opteron systems). It is probably worth doing some comparisons.

With regard to MPI, and other libraries, you have to be a little more careful. All of the core systems have most of the same libraries and use HP-MPI by default, so programs should be able to run between each system without any modification (at the end of the day as long as the runtime libraries and the necessary dependencies are installed you shouldn't have any problems). However, some of the specialty systems use OpenMPI instead of HP-MPI and have different libraries, so you will have to recompile your program to use them there.

Another thing to watch out for is using /home, because it is global. Because /home is global, it is slow and is not intended to be used as a working directory for running jobs. If your program writes to the local /work and /scratch filesystems on the compute clusters, and you submit the job from /work or /scratch (so that the stdout gets written there), then running the executable from /home should be fine. However, if it is run from and/or writes to /home, then it will suffer a severe performance penalty. It's probably easiest to set up your working directory in /work and then just symlink to your binary in /home.
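The binary-in-/home, work-in-/work layout can be sketched as follows. The real paths would be under /home/$USER and /work/$USER; mktemp stand-ins are used here so the sketch runs anywhere:

```shell
#!/bin/sh
# Sketch of the /home-binary, /work-workdir layout.
HOME_DIR=$(mktemp -d)   # stands in for /home/$USER
WORK_DIR=$(mktemp -d)   # stands in for /work/$USER/myjob

# A tiny stand-in "binary" living in the home directory:
printf '#!/bin/sh\necho running from $(pwd)\n' > "$HOME_DIR/myprog"
chmod +x "$HOME_DIR/myprog"

# Do the work in the work directory, symlinking to the binary in home:
cd "$WORK_DIR"
ln -s "$HOME_DIR/myprog" .
./myprog          # executed via the symlink; the cwd (and any output) stays in the work dir

rm -rf "$HOME_DIR" "$WORK_DIR"
```

On a real cluster you would then submit from the /work directory, so the job's stdout lands on the fast filesystem while the binary itself stays in /home.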

My program needs to run for more than seven (7) days but user certification caps me at seven days of run-time; what can I do?

All SHARCNET queues are globally capped at seven (7) days of run-time and there is no means by which to exceed it. Note that although there are higher levels of user certification than the default (User1), these only affect how many processors you can consume simultaneously; the seven day run-time limit cannot be exceeded through higher levels of certification. In order to run a program that requires more wall-clock time than this, you will have to make use of a checkpoint/restart mechanism so that the program can periodically save its state and be resubmitted to the queues, picking up from where it left off.

How do I checkpoint/restart my program?

Checkpointing is a valuable strategy that minimizes any waste of time and resources associated with program or node failure during a long-running job, and is effectively required for any program that needs more than seven (7) days of run-time, as all SHARCNET queues are globally capped at that duration. Assuming it is a serial or multi-threaded program (i.e. *not* MPI), you can make use of the Berkeley Lab Checkpoint/Restart (BLCR) software that is provided on the clusters. Documentation and usage instructions can be found on SHARCNET's BLCR software page. Note that BLCR requires your program to not be statically compiled (i.e., to use shared libraries).

If the program is MPI (or any other type of program requiring a specialized job starter to get it running), the program will have to be able to save state and restart from that state on its own. Please check the documentation that accompanies any software you are using to see what support it has for checkpointing. If the code has been written from scratch, you will need to build checkpointing functionality into it yourself---output all relevant parameters and state such that the program can be subsequently restarted, reading in those saved values and picking up where it left off.
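For code you control, the pattern is straightforward: on startup, read a state file if one exists; checkpoint periodically; on resubmission, the job resumes where it left off. A minimal shell sketch of this idea (the state file name and the counting "simulation" are illustrative stand-ins for real work):

```shell
#!/bin/sh
# Minimal checkpoint/restart sketch: a "simulation" that counts to 10,
# saving its progress in state.txt after every step.
STATE=state.txt

# Resume from the checkpoint if one exists, else start from the beginning:
if [ -f "$STATE" ]; then
    i=$(cat "$STATE")
else
    i=0
fi

while [ "$i" -lt 10 ]; do
    i=$((i + 1))
    # ... one unit of real work would happen here ...
    echo "$i" > "$STATE"        # checkpoint after each step
done
echo "finished at step $(cat "$STATE")"
```

Killing this script partway through and rerunning it continues from the saved step, which is exactly what a resubmitted job would do after hitting the run-time limit.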

How do I run a program remotely?

It is also possible to specify a command to run at the end of an ssh command. A command like ssh sqjobs, however, will not work because ssh does not set up a full environment by default. In order to get the same environment you get when you log in, it is necessary to run the command under bash in login mode.

myhost$ ssh bash -l -c sqjobs

If you wish to specify a command longer than a single word, it is necessary to quote it, as bash -c only takes a single argument. In order to pass these quotes through to ssh, however, it is necessary to escape them; otherwise the local shell will interpret them and strip them off. An example is

myhost$ ssh bash -l -c \' sqsub -r 5h ./myjob \'

Most problems with these commands are related to the local shell interpreting things that you wish to pass through to the remote side (e.g., stripping out any unescaped quotes). Use -v with ssh and set -x with bash to see what command(s) ssh and bash are executing respectively.

myhost$ ssh -v bash -l -c \' sqsub -r 5h ./myjob \'
myhost$ ssh bash -l -c \' set -x\; sqsub -r 5h ./myjob \'
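The quoting behaviour can be seen locally, without ssh at all: bash -c takes only its next argument as the command string, so unquoted trailing words are bound to $0, $1, ... rather than passed to the command:

```shell
#!/bin/sh
# Unquoted: bash -c sees only "echo"; "five", "hour", "job" become $0 $1 $2,
# so nothing but a blank line is printed.
bash -c echo five hour job

# Quoted: the whole string is one argument, so it runs as intended.
bash -c 'echo five hour job'        # prints: five hour job
```

The same thing happens remotely, except that the local shell strips one layer of quoting before ssh ever sees the command, which is why the quotes must be escaped.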

Is package X preinstalled on system Y, and, if so, how do I run it?

The list of packages that SHARCNET has preinstalled on the various clusters, along with instructions on how to use them, can be found on the SHARCNET software page.

What is the batch job scheduling environment SQ?

SQ is a unified frontend for running jobs on SHARCNET, intended to hide unnecessary differences in how the clusters are configured. On clusters which are based on RMS, LSF+RMS, or Torque+Maui, SQ is just a thin shell of scripting over the native commands. On Wobbie, the native queuing system is called SQ.

To run a job, you use sqrun:

sqrun -n 16 -q mpi -r 5h ./foo

This runs foo as an MPI command on 16 processors with a 5 hour runtime limit (make sure to be somewhat conservative with the runtime limit as a job may run for longer than expected due to interference from other jobs). You can control input, output and error output using these flags:

sqrun -o outfile -i infile -e errfile -r 5h ./foo

This will run foo with its input coming from a file named infile, its standard output going to a file named outfile, and its error output going to a file named errfile. Note that using these flags is preferred over shell redirection, since the flags permit your program to do IO directly to the file, rather than having the IO transported over sockets and then to a file.

Often, especially with IO redirection as above, it is convenient to submit a job, and not wait for it to run. To do this, simply add a --bg switch to sqrun, or equivalently use sqsub. It makes no difference to the scheduler whether you run (wait to complete) or submit (batch mode).

For threaded applications (which use Pthreads, OpenMP, or fork-based parallelism), do this:

sqsub -q threaded -n 2 -r 5h ./foo

Serial jobs require no flags beyond the runtime limit

sqrun -r 5h ./foo

but you can provide IO redirection flags if you wish.

How do I show and control jobs under SQ?

To show your jobs, use "sqjobs". By default, it will show you only your own jobs. With "-a" or "-u all", it will show all users; similarly, "-u someuser" will show jobs only for that particular user.

To kill, suspend or resume your jobs, use sqkill/suspend/resume with the job ID as shown by sqjobs.

Note also that providing the -v switch to sqrun/sqsub will print the jobid at submission time.

How do I translate my LSF command to SQ?

SQ very strongly resembles LSF commands such as bsub. For instance, here are two versions, the first assuming LSF, the second using SQ:

bsub -q mpi -n 16 -o term.out prun ./ParTISUN
sqsub -q mpi -n 16 -o term.out ./ParTISUN

There are some differences:

  • SQ doesn't have static queues like LSF. Instead, the "-q" simply describes the kind of job - MPI(parallel), threaded or serial.
  • SQ doesn't use the extra "prun" in there - it knows that parallel jobs always need the prun.
  • sqjobs is quite similar to bjobs.
  • sqkill/suspend/resume is quite similar to bkill/suspend/resume.

How can I submit jobs that will run wherever there are free cpus?

SHARCNET clusters differ in several ways, including access to particular storage and cluster node properties. For instance, if you submit a job which refers to files in /work or /scratch, it may currently only run on that particular cluster. Similarly, a job may require, for instance, a very large amount of memory per processor, only available on Bull. But jobs which do little IO, are serial, and use modest amounts of memory may be run using the "global jobs" facility.

To submit a global job, just add --global to the sqsub command:

sqsub --global -o my.log ./program
sqjobs --global

Again, this currently only applies to jobs which can run in your /home tree (which is very limited in size and speed), and which are serial.

Command 'top' gives me two different memory sizes (virt, res). What is the difference between 'virtual' and 'real' memory?

'virt' refers to the total virtual address space of the process, including virtual space that has been allocated but never actually instantiated, memory which was instantiated but has been swapped out, and memory which may be shared. 'res' is memory which is actually resident - that is, instantiated with real RAM pages. Resident memory is normally the more meaningful value, since it may be judged relative to the memory available on the node. (Recognizing, of course, that the memory on a node must be divided among the resident pages of all processes, so an individual thread must always strive to keep its working set a little smaller than the node's total memory divided by the number of processors.)
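The same two numbers can be read for any process with ps (VSZ is virtual size, RSS is resident set size, both in kilobytes on Linux); for instance, for the current shell:

```shell
#!/bin/sh
# VSZ (virtual size) and RSS (resident set size) for the current process,
# in kilobytes. Virtual is always at least as large as resident.
ps -o vsz=,rss= -p $$
```

Comparing the two columns for your own job's processes on a compute node is a quick way to tell whether a large 'virt' figure actually reflects resident memory pressure.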

There are two cases where the virtual address space size is significant. One is when the process is thrashing - that is, has a working set size bigger than available memory. Such a process will spend a lot of time in 'D' state, since it's waiting for pages to be swapped in or out. A node on which this is happening will have a substantial paging rate, expressed in the 'si' column of output from vmstat (the 'so' column is normally less significant, since si/so do not necessarily balance.)

The second condition where virtual size matters is that the kernel does not implement RLIMIT_RSS, but does enforce RLIMIT_AS (virtual size). We intend to enforce a sanity-check RLIMIT_AS, and in some cases do. The goal is to avoid a node becoming unusable or crashing when a job uses too much memory. Current settings are very conservative, though - 150% of physical memory.

In this particular case, the huge virtual size relative to resident size is almost certainly due to the way Silky implements MPI using shared memory. Such memory is counted as part of every process involved, but obviously does not mean that N * 26.2 GB of RAM is in use.

In this case, the real memory footprint of the MPI rank is 1.2 GB - if you ran the same code on another cluster which didn't have NUMAlink shared memory, both resident and virtual sizes would be about that much. Since most of our clusters have at least 2GB per core, this code could run comfortably on other clusters.

Can I use a script to compile and run programs?

Yes. For instance, suppose you have a number of source files main.f, sub1.f, sub2.f, ..., subN.f. To compile these source files and generate an executable myprog, it's likely that you will type the following command

f77 -o myprog main.f sub1.f sub2.f ... subN.f -llapack

Here, the -o option specifies the executable name myprog rather than the default a.out, and the option -llapack at the end tells the compiler to link your program against the LAPACK library, if LAPACK routines are called in your program. If you have a long list of files, typing the above command every time can be really annoying. You can instead put the command in a file, say, mycomp, then make mycomp executable by typing the following command

chmod +x mycomp

Then you can just type

./mycomp

at the command line to compile your program.

This is the simplest way to deal with multiple source files. However, it is not the most efficient way. The most efficient way to compile multiple files and use different libraries is to use make.
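With make, only the files that changed since the last build are recompiled. A minimal sketch of a Makefile for the example above, assuming GNU make and shortened to three source files (the file and target names mirror the text; adjust them for your own code):

```makefile
# Hypothetical Makefile for the f77 example above.
FC   = f77
OBJS = main.o sub1.o sub2.o

myprog: $(OBJS)
	$(FC) -o myprog $(OBJS) -llapack

%.o: %.f
	$(FC) -c $<

clean:
	rm -f myprog $(OBJS)
```

After editing, say, sub1.f, typing make recompiles only sub1.o and relinks, instead of recompiling every file as the mycomp script does.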

I get errors trying to redirect input into my program when submitted to the queues, but it runs fine if run interactively

The standard method to attach a file as the input to a program when submitting to SHARCNET queues is to use the -i flag to sqsub, e.g.:

sqsub -q serial -i inputfile.txt ...

Occasionally you will encounter a situation where this approach appears not to work, and your program fails to run successfully (for reasons which can be very subtle). Here is an example of one such message, generated by a FORTRAN program:

lib-4001 : UNRECOVERABLE library error 
    A READ operation tried to read past the end-of-file.

Encountered during a list-directed READ from unit 5 
Fortran unit 5 is connected to a sequential formatted text file 
    (standard input). 
/opt/sharcnet/sharcnet-lsf/bin/ line 75: 25730 Aborted (core dumped) "$@"

yet if run on the command line, using standard shell redirection, it works fine, e.g.:

program < inputfile.txt

Rather than struggle with this issue, there is an easy workaround: instead of submitting the program directly, submit a script that takes the name of the file for input redirection as an argument, and have that script launch your program using shell redirection. This circumvents whatever issue the scheduler is having by not having to do the redirection of the input via the submission command. The following shell script will do this (you can copy it directly into a text file and save it to disk; the name of the file is arbitrary):

Bash Shell script:
#!/bin/bash
# set EXENAME to the name of your actual executable
EXENAME=program
if (( $# != 1 )); then
        echo "ERROR: incorrect invocation of script"
        echo "usage: ./ <input_file>"
        exit 1
fi
./${EXENAME} < ${1}

Note that you must edit the EXENAME variable to reference the name of the actual executable; the script can also easily be modified to take or provide additional arguments to the program being executed as desired. Ensure the script is executable by running chmod +x on it. You can now submit the job by submitting the *script*, with a single argument being the file to be used as input, i.e.:

sqsub -q serial -r 5h -o outputfile.log ./ inputfile.txt

This will result in the job being run on a compute node as if you had typed:

./program < inputfile.txt

NOTE: this workaround, as provided, will only work for serial programs, but can be modified to work with MPI jobs by further leveraging the --nompirun option to the scheduler, and launching the parallel job within the script using mpirun directly. This is explained below.
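You can check the wrapper logic locally before submitting, using a stand-in executable (here cat simply echoes the input file back, standing in for your real program; the file names, and the POSIX [ ] test in place of the bash (( )) arithmetic test, are just choices for this sketch):

```shell
#!/bin/sh
# Self-contained local check of the redirection-wrapper pattern.
dir=$(mktemp -d); cd "$dir"

# Stand-in "executable" that just reads stdin, like the real program would:
printf '#!/bin/sh\ncat\n' > program
chmod +x program

# The wrapper, with EXENAME set to the stand-in:
cat > wrapper.sh <<'EOF'
#!/bin/sh
EXENAME=program
if [ $# -ne 1 ]; then
    echo "usage: $0 <input_file>" >&2
    exit 1
fi
./${EXENAME} < ${1}
EOF
chmod +x wrapper.sh

echo "hello from the input file" > inputfile.txt
./wrapper.sh inputfile.txt        # prints: hello from the input file
cd /; rm -rf "$dir"
```

If this prints the contents of the input file, the same wrapper (with EXENAME pointing at your real binary) will behave identically when launched by the scheduler.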

How do I submit an MPI job such that it doesn't automatically execute mpirun?

This can be done by using the --nompirun flag when submitting your job with sqsub. By default, MPI jobs submitted via sqsub are expected to be MPI binary programs, and the system automatically starts mpirun and launches the program for you. While this is convenient in most cases, some users may want to implement pre or post processing for their jobs, in which case they should encapsulate their MPI job in a shell script.

To use --nompirun one has to be aware of the underlying MPI implementation and how it should be used to execute mpirun. For XC systems, this is discussed in some detail in ticket 4677; for CentOS systems, another example is illustrated below.

The basic idea is that you'd write a shell script (eg. named mpi_job_wrapper.x) to do some actions surrounding your actual MPI job (using requin as an example here):

#!/bin/bash
echo "hello this could be any pre-processing commands"
/opt/hpmpi/bin/mpirun -srun ./mpi_job.x
echo "hello this could be any post-processing commands"

You would then make this script executable with:

chmod +x mpi_job_wrapper.x

and submit this to run on 4 cpus for 7 days with job output sent to ./wrapper_job.out:

sqsub -r 7d -q mpi -n 4 --nompirun -o ./wrapper_job.out ./mpi_job_wrapper.x

Now you would see the following output in ./wrapper_job.out:

hello this could be any pre-processing commands
<any output from the MPI job>
hello this could be any post-processing commands

If you are having issues with using --nompirun we recommend that you submit a problem ticket so that staff can help you figure out how it should be utilized on the particular system you are using.

I have a program that runs on my workstation, how can I have it run in parallel?

If the program was written without parallelism in mind, then there is very little that you can do to run it automatically in parallel. Some compilers are able to translate some serial portions of a program, such as loops, into equivalent parallel code, which allows you to exploit the potential of symmetric multiprocessing (SMP) systems. Also, some libraries are able to use parallelism internally, without any change in the user's program. For this to work, your program needs to spend most of its time in the library, of course - the parallel library doesn't speed up the rest of your program. Examples of this include threaded linear algebra and FFT libraries.

However, to gain true parallelism and scalability, you will need to either rewrite the code using the message passing interface (MPI) library or annotate your program with OpenMP directives. We will be happy to help you parallelize your code if you wish. (Note that OpenMP is inherently limited by the size of a single node or SMP machine; scaling beyond that requires MPI.)

Also, the preceding answer pertains only to the idea of running a single program faster using parallelism. Often, you might want to run many different configurations of your program, differing only in a set of input parameters. This is common when doing Monte Carlo simulation, for instance. It's usually best to start out doing this as a series of independent serial jobs. It is possible to implement this kind of loosely-coupled parallelism using MPI, but it is often less efficient and more difficult.

How can I have a quick test run of my program?

Sometimes you may experience a long wait before your queued program starts running. To allow users to test their programs, a "test queue" is provided, which enables users to launch their programs quickly.

To have a test run, use sqsub option --test. For example, if you have an MPI program mytest that uses 8 processors, you may use the following command

sqsub --test -q mpi -n 8 -o mytest.log ./mytest

The only difference here is the "--test". The scheduler will normally start such test jobs within a few seconds.

The main purpose of the test queue is to quickly test the startup of a changed job - just to verify that a real, production run won't hit a bug shortly after starting (for instance, due to missing parameters).

The "test queue" only allows one to run a test program for a very short period of time (currently the limit on most systems is 1 hour), so you must make sure that your test run will not take longer than this to finish. In addition, the system monitors user submissions and decreases the priority of submitted jobs over time within an internally defined time window. Hence, if you keep submitting jobs as test runs, the waiting time before those jobs start will get longer, or you will not be able to submit test jobs any more.

Which system should I choose?

There are many clusters, many of them specialized in some way. We provide an interactive map of SHARCNET systems on the web portal which visually presents a variety of criteria as a decision making aid. In brief however, depending on the nature of your jobs, there may be a clear preference for which cluster is most appropriate:

is your job serial?
Whale is probably the right choice, since it has a very large number of processors, and consequently has high throughput. Your job will probably run soonest if you submit it here.
do you use a lot of memory?
Bull or Hound is probably the right choice.
does your MPI program utilize a lot of communication?
Requin and Bull have the fastest networks, but it's worth trying Narwhal or Saw if you aren't familiar with the specific differences between Quadrics, Myrinet and Infiniband.
does your job (or set of jobs) do a lot of disk IO?
you probably want to stick to one of the major clusters (Bull/Narwhal/Requin/Saw/Whale) which have bigger and much faster (parallel) filesystems.

Where can I find available resources?

Information about available computational resources is available to the public on the SHARCNET web site: see our systems page and our cluster performance page.

Changes in the status of each system, such as down time, power outages, etc., are announced through the following three channels:

  • Web links under systems. You need to check the web site from time to time in order to catch such public announcements.
  • System notice mailing list. This is the passive way of being informed: you receive the notices by e-mail as soon as they are announced. Some people might find this annoying, and such notices may be buried in dozens or hundreds of other e-mail messages in your mail box, hence easily missed.
  • SHARCNET RSS broadcasting. A good analogy of RSS is like traffic information on the radio. When you are on a road trip and you want to know what the traffic conditions are ahead, you turn on the car radio, tune-in to a traffic news station and listen to updates periodically. Similarly, if you want to know the status of SHARCNET systems or the latest SHARCNET news, events and workshops, you can turn to RSS feeds on your desktop computer.

The term RSS may stand for Really Simple Syndication, RDF Site Summary, or Rich Site Summary depending on the version. Written in the format of XML, RSS feeds are used by websites to syndicate their content. RSS feeds allow you to read through the news you want, at your own convenience. The messages will show up on your desktop, e.g. using Mozilla Thunderbird, an integrated mail client software, as soon as there is an update.

Can I find my job submission history?

Yes. Every job submission is recorded in a database. Each record contains the command, the submission time, the start time, the completion time, the exit status of your program (i.e. succeeded or failed), the number of CPUs used, the system, and so on.

You may review the history by logging in to your web account.

How are jobs scheduled?

Job scheduling is the mechanism which selects waiting ("queued") jobs to be started ("dispatched") on nodes in the cluster. In all cases, production SHARCNET clusters are "exclusively" scheduled, so that a job will have complete access to the CPUs it is currently running on (though it may be pre-empted during the course of its execution, as noted below). Details as to how jobs are scheduled follow below.

How long will it take for my queued job to start?

In practice, if your potential job does not cause you to exceed your certification limit, there are enough free processors to run your job, and no one else has any jobs queued, then you should expect your jobs to start immediately. Once there are more jobs queued than available resources, the scheduler will attempt to arbitrate between the CPU demands of all queued jobs. This arbitration happens in the following order: Dedicated Resource jobs first, then "test" jobs (which may also preempt normal jobs), and finally normal jobs. Within the set of pending normal jobs, the scheduler will prefer jobs belonging to groups which have high Fairshare priority (see below).

For information on expected queue wait times, users can check the Recent Cluster Statistics table in the web portal. This is historical data and may not correspond to the current job load on the cluster, but it is useful for identifying longer-term trends.

What determines my job priority relative to other groups?

The priority of different jobs on the systems is ranked according to the usage by the entire group, across SHARCNET. This system is called Fairshare.

Fairshare is based on a measure of recent (currently, past 2 months) resource usage. All user groups are ranked into 5 priority levels, with the heaviest users given lowest priority. You can examine your group's recent usage and priority here: Research Group's Usage and Priority.

This system exists to allow for new and/or light users to get their jobs running without having to wait in the queue while more resource consuming groups monopolize the systems.

Why did my job get suspended?

Sometimes your job may appear to be in a running state, yet nothing is happening and it isn't producing the expected output. In this case the job has probably been suspended to allow another job to run in its place briefly.

Jobs are sometimes preempted (put into a suspended state) if another higher-priority job must be started. Normally, preemption happens only for "test" jobs, which are fairly short (always less than 1 hour). After being preempted, a job is resumed (and the intervening period is not counted as usage.)

On contributed systems, the PI who contributed equipment and their group have high-priority access and their jobs will preempt non-contributor jobs if there are no free processors.

Some specific scheduling idiosyncrasies:

One problem with cluster scheduling is that for a typical mix of job types (serial, threaded, various-sized MPI), the scheduler will rarely accumulate enough free CPUs at once to start any larger job. When a job completes, it frees N CPUs. If there is an N-CPU job queued (and of appropriate priority), it will be run. Frequently, jobs smaller than N will start instead. This may still give 100% utilization, but each of those jobs will complete, probably at different times, effectively fragmenting the N CPUs into several smaller sets. Only a period of idleness (a lack of queued smaller jobs) will allow enough CPUs to collect to let larger jobs run.

Narwhal uses a form of reservation scheduling to address this: for a fixed period of time, the scheduler will accumulate idle CPUs in an attempt to run the currently highest-priority job. If this takes too long, other jobs will be started, and the accumulation will begin again. The accumulation period is chosen to optimize the chances of running jobs of a target size (around 32 CPUs).

Requin is intended to enable "capability", or very large jobs. Rather than eliminating the ability to run more modest job sizes, Requin is configured with a weekly cycle: every Monday at noon, all previously running jobs will have finished and large queued jobs can start. One implication of this is that no job over 1 week can be run (and a 1-week job will only have one chance per week to start). Shorter jobs can be started at any time, but only a 1-day job can be started on Sunday, for instance.

Note that all clusters now enforce runtime limits: if a job is still running at the end of its stated limit, it will be terminated. (Before December 1, 2008, only Narwhal enforced runtime limits.)

Gaussian jobs on Bull are also scheduled somewhat differently: they are given a separate queue, which provides slightly higher priority to groups who have bought into the SHARCNET Gaussian license.

Finally, when running DDT or OPT (debugger and profiler), it's normal to use the test queue. If you need to run such jobs longer than 1 hour, and find the wait times too high when using the normal queues, let us know (open a ticket). It may be that we need to provide a special queue for these uses - possibly preemptive like the test queue.

Programming and Debugging

What is MPI?

MPI stands for Message Passing Interface, a well-accepted standard in the scientific computing community for writing portable parallel programs. MPI is implemented as a library of subroutines which is layered on top of a network interface. The MPI standard defines both C/C++ and Fortran interfaces, so any of these languages can use MPI. There are several MPI implementations, including OpenMPI and MPICH. High-performance interconnect vendors also provide their own libraries, usually a version of MPICH layered on an interconnect-specific hardware library. For SHARCNET Alpha clusters, the interconnect is Quadrics, which provides MPI and a low-level library called "elan". For Myrinet, the low-level library is MX or GM.

In addition to C/C++ and Fortran versions of MPI, there exist other language bindings as well. If you have any special needs, please contact us.
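As a concrete illustration, a minimal MPI program in C might look like the following. This is a generic sketch, not specific to any SHARCNET cluster; it assumes it is compiled with your MPI implementation's compiler wrapper (e.g. mpicc) and launched through the batch system.

```c
#include <stdio.h>
#include <mpi.h>

/* Minimal MPI example: each process reports its rank.
 * Compile with an MPI wrapper such as: mpicc hello.c -o hello */
int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}
```

Run with, say, 4 processes and each process prints its own line; any real communication (MPI_Send/MPI_Recv and friends) follows this same initialize/use/finalize pattern.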

What is OpenMP?

OpenMP is a standard for programming shared memory systems using threads, via compiler directives embedded in the source code. It provides a higher-level approach to utilizing multiple processors within a single machine while keeping the structure of the source code as close to the conventional form as possible. OpenMP is much easier to use than the alternative (Pthreads) and is thus well suited for adding modest amounts of parallelism to pre-existing code. Because OpenMP is a set of compiler directives, your code can still be compiled by a serial compiler and should still behave the same.

OpenMP for C/C++ and Fortran is supported by many compilers, including the PathScale and PGI compilers for Opterons and the Intel compilers for IA32 and IA64 (such as SGI's Altix). OpenMP support has been provided in the GNU compiler suite since v4.2 (OpenMP 2.5), and starting with v4.4 it supports the OpenMP 3.0 standard.

How do I run an OpenMP program with multiple threads?

An OpenMP program uses a single process with multiple threads rather than multiple processes. On SMP systems, threads are scheduled on available processors and thus run concurrently. For each thread to run on its own processor, one needs to request the same number of CPUs as the number of threads used. This is done differently on the different SHARCNET systems where queueing systems are used. For instance, on Tru64 Alpha clusters, to run an OpenMP program foo that uses four threads with the sqrun command, use the following:

sqrun -q threaded -n 4 ./foo

The option -n 4 reserves 4 CPUs for the process. The same command applies to all systems which support sqrun (SQ).

What mathematics libraries are available?

Every system has the basic linear algebra libraries BLAS and LAPACK installed. Normally these interfaces are provided by vendor-tuned libraries. On Intel-based (Xeon, Itanium2) clusters, users have access to the Intel Math Kernel Library (MKL). On Opteron-based clusters, AMD's ACML library is available.

One may also find the GNU Scientific Library (GSL) useful for particular needs. GSL is an optional package, available on any machine.

For a detailed list of libraries on each cluster, please check the documentation on the corresponding SHARCNET satellite web sites.

How do I use mathematics libraries such as BLAS and LAPACK routines?

First you need to know which subroutine you want to use; check the references to find which routines meet your needs. Then place calls to those routines in your program and compile it against the particular libraries that provide them. For instance, if you want to compute the eigenvalues, and optionally the eigenvectors, of an N by N real nonsymmetric matrix in double precision, you will find that the LAPACK routine DGEEV does this. All you need to do is call DGEEV with the parameters specified in the LAPACK documentation, and compile your program to link against the LAPACK library:

f77 -o myprog main.f sub1.f sub2.f ... sub13.f -llapack

The option -llapack tells the compiler to use library liblapack.a.

If the system you are using has vendor-supplied libraries with optimized LAPACK routines, such as Intel's Math Kernel Library MKL (libmkl.a) or AMD's ACML library (libacml.a), use those libraries instead with the options -lmkl or -lacml, as they will give you better performance. The installation directories of these vendor libraries may vary from site to site. If such a library is not installed in a standard directory (/lib, /usr/lib or /usr/local/lib), then chances are you will have to specify the lookup path for the compiler. For instance, on the Itanium2 cluster Spinner, the Intel version of LAPACK in the MKL is located in /opt/intel/mkl/lib/64, so in the above example one would use the command

ifort -o myprog main.f sub1.f sub2.f ... sub13.f -L/opt/intel/mkl/lib/64 -lmkl_lapack

where ifort is the Intel Fortran compiler, and the options -L/opt/intel/mkl/lib/64 -lmkl_lapack specify the library path and library. Please check the local documentation at each site for details.

You should never need to copy or use the individual source code of those library routines and compile them together with your program.

My code is written in C/C++, can I still use those libraries?

Yes. Most of the libraries have C interfaces. If you are not sure about the C interface, or you need assistance using libraries written in Fortran, we can help you out on a case-by-case basis.

What packages are available?

Various packages have been installed on SHARCNET clusters at users' requests. Custom installed packages include, for example, Gaussian, PETSc, R, Featflow, Gamess, Tinker, Rasmol, and Maple. Please check the SHARCNET web portal for the software packages installed and related usage information.

What interconnects are used on SHARCNET clusters?

Currently, several different interconnects are being used on SHARCNET clusters: Quadrics, Myrinet, InfiniBand and standard IP-based ethernet.

I would like to do some grid computing, how should I proceed?

It depends on what you mean by "grid computing". If you simply mean you want to queue up a bunch of jobs (MPI, threaded or serial) and have them run without further attention, then great! SHARCNET's model is exactly that kind of grid. However, we do not attempt to hide differences between clusters, such as file systems that are remote, or different types of CPUs or interconnect. We do not currently attempt to provide a single queue which feeds jobs to all of the clusters. Such a unified grid would require you to ensure that your program was compiled and configured to run under Alpha Linux, Alpha Tru64, IA32 Linux, IA64 Linux and AMD64 Linux. It would also have to assume nothing about shared file systems, and it would have to be aware of the 5000x difference in latency when sending messages within a cluster versus between clusters, as well as either rely on least-common-denominator networking (ethernet) or explicitly manage the differences between Quadrics, Myrinet, InfiniBand and ethernet.

If, however, you would like to try something "unusual" that requires much more freedom than the current resource management system can handle, then you will need to discuss the details of your plan with us so we can make special arrangements.

Debugging serial and parallel programs

A debugger is a program which helps to identify mistakes ("bugs") in programs, either at run time or "post-mortem" (by analyzing the core file produced by a crashed program). Debuggers can be either command-line or GUI (graphical user interface) based. Before a program can be debugged, it needs to be (re-)compiled with the -g switch, which tells the compiler to include symbolic information in the executable. For MPI problems on the HP XC clusters, -ldmpi links in the HP MPI diagnostic library, which is very helpful for discovering incorrect use of the API.

SHARCNET highly recommends using our commercial debugger DDT. It has a very friendly GUI, and can also be used for debugging serial, threaded, and MPI programs. A short description of DDT and cluster availability information can be found on its software page. Please also refer to our detailed Parallel Debugging with DDT PDF tutorial.

SHARCNET also provides the command-line debuggers gdb (installed on all clusters; type "man gdb" for a list of options and see our Common Bugs and Debugging with gdb tutorial), pathdb (Opteron clusters), and idb (Silky). The idb debugger also has a GUI (run "idb -gui").

What if I do not want a core dump ?

When you submit a batch job to the test queue, it will automatically produce a core dump if a segmentation fault occurs.

This is controlled by the "ulimit -c" setting. If you do not want a core dump, you have to submit a script and, in that script, specify

"ulimit -c 0"

For illustration purposes, consider the following simple program, residing in file simple.c:

#include <stdio.h>

int main(void) {
    int i;
    int array[10];

    i = 500000000;
    printf("Index i = %d\n", i);
    array[i] = 0;   /* out-of-bounds write: triggers the segmentation fault */
    return 0;
}

We compile the above program using the command:

   gcc -g simple.c

which produces the executable a.out. We then submit the following script to execute the job in batch mode in the test queue:


where the sub_job script file is as follows:

sqsub -t -r 1   -q serial -o ${CLU}_CODE_%J  ./a.out

The above procedure produces an output file which starts with the following lines:

srun: error: nar315: task0: Segmentation fault (core dumped)
srun: Terminating job

and a core dump file was produced.

If the program is large, the core file will also be very large and will take a lot of space and time to dump.

So, for those cases where you do not want or need the core file, you should submit the script sub_job_no_core_dump as follows:


where sub_job_no_core_dump is the following:

sqsub -t -r 1   -q serial -o ${CLU}_CODE_%J  ./ulimit_script

and where ulimit_script is another script:

ulimit -c 0
./a.out

This time the output from ./sub_job_no_core_dump is as follows:

~/ulimit_script: line 4: 31097 Segmentation fault ./a.out
srun: error: nar150: task0: Exited with exit code 139

and no core dump file was produced.

Note: All scripts must have the proper permissions, which are obtained by issuing the command:

     chmod ugo+rx <script_name>

What does my error code mean ?

When your program returns an error code, it means something went wrong. You can find out the meaning of the error code by typing the following command:

        man 7 signal

However, if you want to fix the problem then you should use a debugger (e.g. gdb or DDT) to locate the instructions that are causing the problem. For a brief introduction on how to use gdb see the Common Bugs and Debugging with gdb online tutorial.

What is NaN ?

NaN stands for "Not a Number". It is an undefined or unrepresentable value, typically encountered in floating point arithmetic (e.g. the square root of a negative number). To debug this in your program one typically has to unmask or trap floating point exceptions. There are further details in the Common Bugs and Debugging with gdb tutorial.

How can I use double precision for GPU variables (on cluster angel) ?

To use double precision for CUDA variables you need to add the following flag to the compile command:

-arch sm_13

For further information on using CUDA please see this tutorial / online reference.

How do I compile my MPI program with diagnostic information?

HP MPI is the MPI version used by default on the Opteron clusters running the XC operating system. To get diagnostic information (useful for solving "MPI BUG" errors), compile your code with the additional -ldmpi flag.

Getting Help

I have encountered a problem while using a SHARCNET system and need help, who should I talk to?

If you have access to the Internet, we encourage you to use the problem ticketing system (described in detail below) through the web portal. This is the most efficient way of reporting a problem as it minimizes email traffic and will likely result in you receiving a faster response than through other channels.

You are also welcome to contact system administrators and/or high performance technical computing consultants at any time. You may find their contact information on the directory page.

SHARCNET Problem Ticket System

What is a "problem ticket system"?

This is a system that allows anyone with a SHARCNET account to start a persistent email thread that is referred to as a "problem ticket". The thread is stored indefinitely by SHARCNET and can be consulted by any SHARCNET user in the future. When a user submits a new ticket it will be brought to the attention of an appropriate and available SHARCNET staff member for resolution.

You can find the SHARCNET ticket system here, or by logging into our website and clicking on "Help" then "Problems" in the top left-hand-side menu.

How do I search for existing tickets ?

Type a meaningful string into the search box when logged into the SHARCNET web portal. You can find this text entry box beside the Go button on the top right-hand-side of the page in the web portal.

It is recommended that one use specific words when searching, for example the exit code returned in your job output, or the error message produced when attempting a command. Common search terms may produce too many results; this, coupled with the lack of sophisticated ranking of results, means your search will likely be misleading or time consuming if you have to sift through many results by hand.

What do I need to specify in a ticket ?

If you do not find any tickets that deal with your current problem (as illustrated above), then you should include the following information, where relevant, when submitting a ticket:

  1. use a concise and unique Subject for the ticket
    • this makes it easier to identify in search results, for example
  2. select sensible values for the System Name and Category drop down boxes
    • this helps guide your ticket to the right staff member as quickly as possible
  3. in the Comment text entry box:
    1. if the problem pertains to a job report the jobid associated with the job
      • this is an integer that is returned by sqsub when you submit the job
      • you can also find a listing of your recent jobs (including their jobid) in the web portal at the bottom of this page
    2. report the exact commands necessary to duplicate the problem, as well as any error output that helps identify the problem
      • if relevant, this should include how the code is compiled, how the job is submitted, and/or anything else you are doing from the command line relating to the problem
    3. if this ticket relates to another ticket, please specify the associated ticket number(s)
    4. if you'd like for a particular staff member to be aware of the ticket, mention them
  4. if you want to expedite resolution you can make your files publicly available ahead of time
    • you should include any relevant files required to duplicate the problem
    • if you're not comfortable with changing your own file permissions, in Comment you can request that a staff member provide a location where you can copy the necessary files, or arrange file transfer via other means. If your code is really sensitive you may have to arrange to meet in person to show them the problem.

How do I submit a ticket?

We recommend that you read the above section on what to specify in a ticket before submitting a new ticket.

Users can submit a problem ticket describing an issue, problem or other request and they will then receive messages concerning the ticket via email (the ticket can also be consulted via the web portal).

You can also open a ticket automatically by email, sent from the email address associated with your SHARCNET account.

How do I give other users access to my files ?

The following instructions provide commands that you can use to make your files available to all SHARCNET users, including staff. They assume that you are sharing files in your /home directory, but you can change this to /work, /scratch, etc. with the same effect.

In the below, instead of <your-subdirectory> , type the name of the subdirectory where the files you want to give access to are located:

(1) Go to your home directory by running:

       cd ~
(2) Authorize access to the home directory by running:

       chmod o+x  .

(3) Authorize access to <your-subdirectory> by running:

       chmod -R o+rX <your-subdirectory>

Note that if the directory you wish to share is nested multiple directories below your home directory (e.g. you want to give access to ~/dir1/dir2, but not ~/dir1), you will have to run:

       chmod o+x .

in each of the intervening directories that is not already globally accessible.

To revoke the public access granted by these changes, one can simply run the following command:

       chmod o-x .

I am new to parallel programming, where can I find quick references at SHARCNET?

SHARCNET has a number of training modules on parallel programming using MPI, OpenMP, pthreads and other frameworks. Each of these modules has working examples that are designed to be easy to understand while illustrating basic concepts. You may find these along with copies of slides from related presentations and links to external resources on the Main Page of this training/help site.

I am new to parallel programming, can you help me get started with my project?

Absolutely. We will be glad to help you, from planning the project and architecting your application with appropriate algorithms, to choosing efficient tools to solve the associated numerical problems, to debugging and analyzing your code. We will do our best to help you speed up your research.

Can you install a package on a cluster for me?

Certainly. We suggest you make the request by sending e-mail or by opening a problem ticket with the specific request.

I am in a process of purchasing computer equipment for my research, would you be able to provide technical advice on that?

If you tell us what you want, we may be able to help you out.

Does SHARCNET have a mailing list or user group?

Yes. You may subscribe to one or more mailing lists on the email list page available once you log into the web portal.

Does SHARCNET provide any training on programming and using the systems?

Yes. SHARCNET provides workshops on specific topics from time to time and offers courses at some sites. Every June, SHARCNET holds an annual summer school with a variety of in-depth, hands-on workshops. All materials from past workshops/presentations can be found on the SHARCNET web portal.

How do I watch tickets I don't own?

There are two ways. First, to view the tickets of user "USERID", visit a URL of the following form:

where USERID is the user whose tickets you want to see. In the "Actions" column, click on "watch" for problems that you want to follow. This should enable you to receive notifications whenever any of the problems you are "watching" are updated.

If you want to do the same for tickets posted by other members of your group, just access their user page (listed on ).

The other way is to use the search box on the SHARCNET website. By typing the ticket number or userid, you can do the same thing as described above.

Research at SHARCNET

Where can I find what other people do at SHARCNET?

You may find some of the research activities at SHARCNET by visiting our research initiatives and researcher profile pages.

I have a research project I would like to collaborate on with SHARCNET, who should I talk to?

You may contact SHARCNET head office or contact members of the SHARCNET technical staff.

How can I contribute compute resources to SHARCNET so that other researchers can share it?

Most people's research is "bursty" - there are usually sparse periods of time when some computation is urgently needed, and other periods when there is less demand. One problem with this is that if you purchase the equipment you need to meet your "burst" needs, it'll probably sit, underutilized, during other times.

An alternative is to donate control of this equipment to SHARCNET, and let us arrange for other users to use it when you are not. We prefer to be involved in the selection and configuration of such equipment. Some of SHARCNET's most useful clusters were created this way: Goblin and Wobbie were purchased with user contributions. Our promise to contributors is that, as much as possible, they should obtain as much benefit from the cluster as if it were not shared. Owners get preferential access. Naturally, owners are also able to burst to higher peak usage, since their equipment has been pooled with other contributions. (Technically, SHARCNET cannot itself own such equipment; it remains owned by the institution in question, and will be returned to the contributor upon request.) If you think this model will also work for you and you would like to contribute your computational resources to help the research community at SHARCNET, please contact us to make such an arrangement.

I do not know much about computation, nor is it my research interest. But I am interested in getting my research done faster with the help of the high performance computing technology. In other words, I do not care about the process and mechanism, but only the final results. Can SHARCNET provide this type of help?

We will be happy to bring the technology of high performance computing to you to accelerate your research, if at all possible. If you would like to discuss your plan with us, please feel free to contact our high performance computing specialists. They will be happy to listen to your needs and are ready to provide appropriate suggestions and assistance.

I am a faculty member from non-SHARCNET member institution. Could I apply for an account and sponsor my student's accounts?

Whether or not a Canadian faculty researcher is at a SHARCNET member institution has little bearing on their account; the main difference is that users from institutions external to the consortium are reviewed annually by the Scientific Director rather than by a local SHARCNET site representative. External faculty can sponsor accounts for students in the same fashion as faculty at member institutions. The other notable difference is lack of access to our Dedicated Resource program and other fellowship programs.

Fellowships at SHARCNET

I heard SHARCNET offers fellowships, where can I get more information?

You may find additional information regarding fellowships and other dedicated resource opportunities on the Research Fellowships page of the web portal. A dedicated online FAQ is also available.

I would like to do some research at SHARCNET as a visiting scholar, how should I apply?

In general, you will need to find a hosting department or a person affiliated with one of the SHARCNET institutions. You may also contact us directly for more specific information.

I would like to send my students to SHARCNET to do some work for me. How should I proceed?

See above.

Contacting SHARCNET

How do I contact SHARCNET for research, academic exchanges, and technical issues?

Please contact our Scientific Director or check for your specific issue in this FAQ.

How do I contact SHARCNET for business development, education and other issues?

Please contact SHARCNET head office.

How to Acknowledge SHARCNET in Publications

How do I acknowledge SHARCNET in my publications?

We recommend the following:

This work was made possible by the facilities of the Shared Hierarchical 
Academic Research Computing Network ( and Compute/Calcul Canada.

I've seen different spellings of the name, what is the standard spelling of SHARCNET?

We suggest the spelling SHARCNET, all in upper case.

What types of research programs / support are provided to the research community?

Our overall intent is to provide support that can both respond to the range of needs that the user community presents and help to increase the sophistication of the community and enable new and larger-in-scope applications making use of SHARCNET's HPC facilities. The range of support can perhaps best be understood in terms of a pyramid:

Level 1

At the apex of the pyramid, SHARCNET supports a small number of projects with dedicated programmer support. The intent is to enable projects that will have a lasting impact and may lead to a "step change" in the way research is done at SHARCNET. Inter-disciplinary and inter-institutional projects are particularly welcomed. Projects can expect to receive 2 to 6 months of direct support per year for one to two years. Programming time is allocated through a competitive process. See the guidelines.

Level 2

The middle layers of support are provided through a number of initiatives.

These include:

  • Programming support of more modest duration (several days to one month engagement, usually part time)
  • Training on a variety of topics through workshops, seminars and online training materials
  • Consultation. This may include user-initiated interactions on particular programs, algorithms, techniques, debugging, optimization etc., as well as unsolicited help to ensure effective use of SHARCNET systems
  • Site Leaders play an important role in working with the community to help researchers connect with SHARCNET staff and to obtain appropriate help and support.

Level 3

The base level of the pyramid handles the very large number of small requests that are essential to keeping the user community working effectively with the infrastructure on a day-to-day basis. Several of these can be answered by this FAQ; many of the issues are presented through the ticketing system. The support is largely problem oriented with each problem being time limited.