Revision as of 16:35, 22 November 2012 by Hahn

Knowledge Base / Expanded FAQ

This page is a comprehensive collection of essential information needed to use SHARCNET, gathered conveniently on a single page of our Help Wiki. If you are a new SHARCNET user, this page most likely contains all you need to get going on SHARCNET. However, there is much more information in this Help Wiki. Please use the search box to find pages that may be relevant to you. You can also go to the Main Page of this wiki for a general table of contents. Finally, you can also look at the list of all articles in this Help Wiki or a list of all categories.


Contents

About SHARCNET

What is SHARCNET?

SHARCNET stands for Shared Hierarchical Academic Research Computing Network. Established in 2000, SHARCNET is the largest high performance computing consortium in Canada, involving seventeen universities and colleges across southern, central and northern Ontario.

SHARCNET is a member consortium in the Compute/Calcul Canada national HPC platform.

Where is SHARCNET?

The main office of SHARCNET is located in the Western Science Centre at The University of Western Ontario. The SHARCNET high performance clusters are installed at a number of the member institutions in the consortium and operated by SHARCNET staff across different sites.

What does SHARCNET have?

The infrastructure of SHARCNET consists of a group of 64-bit high performance Opteron and Xeon clusters with close to 20,000 CPUs, along with a group of storage units, deployed at a number of universities and colleges. These high performance clusters are interconnected through the Ontario Research Innovation Optical Network (ORION) with a private, dedicated connection running at 10 Gigabits per second on major links (some links have 1 Gigabit per second connections). SHARCNET clusters run the Linux operating system.

What can I do with SHARCNET?

If you have a program that takes months to run on your PC, you could probably run it within a few hours using hundreds of processors on the SHARCNET clusters, provided your program is inherently parallelisable. If you have hundreds or thousands of test cases to run on your PC or on computers in your lab, running those cases independently on hundreds of processors will significantly reduce your test cycles.

If you have used beowulf clusters made of commodity PCs, you may notice a performance improvement on SHARCNET clusters which have high-speed Quadrics, Myrinet and Infiniband interconnects, as well as SHARCNET machines which have large amounts of memory. Also, SHARCNET clusters themselves are connected through a dedicated, private connection over the Ontario Research Innovation Optical Network (ORION).

If you have access to other super computing facilities at other places and you wish to share your ideas with us and SHARCNET users, please contact us. Together we can make SHARCNET better.

Who is running SHARCNET?

The daily operation and development of SHARCNET computational facilities is managed by a group of highly qualified system administrators. In addition, we have a team of high performance technical computing consultants, who are responsible for technical support on libraries, programming and application analysis.

How do I contact SHARCNET?

For technical inquiries, you may send E-mail to help@sharcnet.ca, or contact your local system administrator or HPC specialist. For general inquiries, you may contact the SHARCNET main office.

Getting an Account with SHARCNET and Related Issues

What is required to obtain a SHARCNET account

Anyone who would like to use SHARCNET may apply for an account. Please bear in mind the following:

  • There are no shared/group accounts; each person who uses SHARCNET requires their own account and must not share their password
  • Applicants who are not a primary investigator (e.g. students) require an account sponsor who must already have a SHARCNET account. This is typically one's supervisor.
  • There is no fee for academic access, but account sponsors are responsible for reporting their research activities to Compute Canada, and all academic SHARCNET users must obtain a Compute Canada account before they may apply for a SHARCNET account.
  • All SHARCNET users must read and follow the policies listed here

How do I apply for an account?

Applying for an account is either done through the Compute Canada Database (for academic users) or by contacting SHARCNET (for non-academic use). Each case is outlined below.

Note: If you have an existing SHARCNET account, do not apply for a new account. Each person should only ever have one account, which can be transferred to new groups, etc. See this knowledge base entry for further details.

Academic Users

Faculty, students, and staff at academic institutions, as well as their research collaborators, can apply for their SHARCNET account as follows:

  1. Acquire a Compute Canada Identifier (CCI) by applying for a Compute Canada account here.
    • If you are not a primary investigator (e.g. students, collaborators, research assistants), your sponsor must already have an account, and you will need to know their CCRI to complete the application form. Ask them for it or email help@sharcnet.ca .
    • If you are not at a Canadian institution you can still get a CCI (and hence a SHARCNET account) as long as your sponsor has a CCI.
      • In this case specify Institution: Other... in the Compute Canada account application form.
  2. You will be sent a message to confirm your email address.
    • Check your spam folder if you don't see it.
    • Click on the link in the message to confirm.
  3. Once you confirm your email address, an authorizing authority will be sent an email requesting that they confirm your application.
    • If you are applying for a Faculty position account from a SHARCNET member institution then your SHARCNET Site Leader will authorize your account.
    • If your sponsor is from a SHARCNET member institution they will authorize your account.
      • Sponsors may need to check their spam folder to find the confirmation request.
    • If your account sponsor is not located at a SHARCNET member institution your local consortium in Compute Canada will approve your application based on their policy.
  4. Once your Compute Canada account is authorized, log back in to the Compute Canada Database where you originally applied for an account.
  5. Navigate to the Consortium Accounts Page; it is a link at the bottom of your login page named Apply for consortium account.
  6. Click the Apply button next to the word SHARCNET.
  7. Follow the instructions on the SHARCNET website to complete the application.
    • If you are not a primary investigator, your sponsor needs a SHARCNET account (in addition to their Compute Canada account) before you can complete the SHARCNET application.
  8. Your sponsor or SHARCNET Site Leader (see here for a list of email contacts for each site) will be contacted to approve your SHARCNET account.
  9. A SHARCNET staff person will review your application before you receive cluster access.

You will be sent an email containing either your new account credentials and information on getting started with SHARCNET or the outcome of your application once it has been processed. Please note that it may take up to 1 business day to process your application once it has been successfully submitted (including all authorizations).

If you are having trouble with the above instructions please contact help@sharcnet.ca for assistance.

Non-academic Users

All other account requests (commercial access, non-academic, ineligible for a CCI, etc.) should be sent to accounts@sharcnet.ca. These are dealt with on a per-case basis and approved following consultation with SHARCNET Administration. If you are working outside of academia we recommend you read our Commercial Access Policy which can be found in the SHARCNET web portal here.

How do I update my account?

You do not need to update your SHARCNET account - presently it is automatically activated or deactivated based on the status of your primary role with Compute Canada (your primary CCRI). In other words, all SHARCNET users must comply with Compute Canada reporting and account renewal requirements to maintain access to their SHARCNET account.

Background

Prior to 2012 all primary investigators were required to report information about their SHARCNET-related research programs to SHARCNET on an annual basis. In 2012 this function was superseded by centralized reporting at Compute Canada. All academic SHARCNET users (both sponsored accounts and primary investigators) must now participate in annual reporting to Compute Canada as part of our national account renewal process. You will be notified in advance of the renewal / reporting process via email with instructions.

Account Deactivation

Failure to report the requested information to Compute Canada will result in your Compute Canada account being deactivated, and consequently your SHARCNET account as well. For continued access to SHARCNET it is imperative that you renew your Compute Canada account.

Further Information and Contacts

For further information please see the Compute Canada FAQ.

If anything is unclear or if you have any questions about the Compute Canada account renewal / reporting process please email accounts@computecanada.org. If you have any questions about account renewals that directly pertain to SHARCNET please email help@sharcnet.ca.

I am changing supervisor or I am becoming faculty, and I already have a SHARCNET account. Should I apply for a new account?

No, you should apply for a new role (CCRI) and notify SHARCNET that you would like to make it your primary role. The process is as follows:

  1. apply for a new CCRI at the Compute Canada Database under your new supervisor/position
  2. once you have the new CCRI and it has been authorized, email help@sharcnet.ca requesting that your new role be made the primary

This should result in your SHARCNET account automatically changing status/sponsor as appropriate after some update delay.

Notes:

  1. If you are changing supervisor then your new supervisor must have a SHARCNET account.
  2. Your files will retain their old group ownership after the switch (although you may change them to your new group if you wish with the chgrp command).
  3. By default the email address associated with your account will not change to the one associated with your new role. If you wish to use your new email address you have to update your Contact Information at the Compute Canada Database to make your new address Primary.

I have an existing SHARCNET account and need to link it to a new Compute Canada account, how do I do that?

You first need to get a Compute Canada Role Identifier (CCRI) (steps 1-3) and then notify SHARCNET that you would like to link your Compute Canada Account (CCI) to your existing SHARCNET account.

Important Notes: If you are a sponsored user then your sponsor must complete this process before you can proceed to step 4.

  1. submit a Compute Canada Account Application
    • creating this account will also create a CCRI
    • Note: if you have an account sponsor you will need their CCRI to apply
  2. confirm your email by clicking on the link in the email message you receive
    • you may have to check your spam folder for this message as it is automatically generated
  3. wait for your sponsor or site leader to approve your account
    • your sponsor may need to check his or her spam folder to find the confirmation email.
    • depending on your local consortium, your consortium may need to approve your account after your sponsor approves it.
    • when approved, you'll receive email indicating your account is active
    • it will also contain your new CCRI
  4. check your SHARCNET account profile to see if your Active Role is your CCRI, and that your old SHARCNET-only role (sn*) is listed in Inactive Roles
    • if you weren't linked automatically, or if you no longer have access to your old SHARCNET account profile, email help@sharcnet.ca to request that your Compute Canada and SHARCNET accounts be linked

After linking your accounts there may be a modest delay before your status is updated in the SHARCNET account system. If you are logged into any clusters you should log out and back in again to update your LDAP group membership; otherwise, jobs you submit may fail with warning messages containing bsub status=65280. After linking accounts, SHARCNET will use your primary email address on file with Compute Canada for all communications. If your sponsor or position has changed since you last accessed your SHARCNET account, you should review the notes in the above section concerning change of sponsor/supervisor.

If you encounter problems please email help@sharcnet.ca.

What is a role / CCRI ?

Each person may have one or more roles that are associated with each of their current and past positions. These various roles ultimately link back to one's CCI (Compute Canada Identifier). If the roles are created through Compute Canada they are referred to as a CCRI (Compute Canada Role Identifier), although other roles pre-dating Compute Canada also exist.

In practice you only need to be concerned with your role (and the appropriate role from your sponsor) when applying for accounts, running jobs in particular projects associated with a particular group/sponsor, or when viewing the SHARCNET web portal with multiple roles (you may see different information in the web portal depending on which role you have selected to be active).

For further information about roles please see the SHARCNET-specific role information here and the more general Compute Canada specific information here.

Can I just have a cluster account without having a web portal account?

No. The web portal account is an online interface to your account in our user database. It provides a way of managing your information and keeping track of problems you may encounter.

Can I E-mail or call to open an account?

No, please follow the instructions above.

OK, I've seen and heard the word "web portal" enough, what is it anyway?

A web portal is a web site that offers online services. Usually a web portal has a database at the backend, in which people can store and access personal information, but may involve other software services like this wiki. At SHARCNET, registered users can login to the web portal, manage their profiles, submit and review programming and performance related problems, look-up solutions to problems, contribute to our wiki, and assess their SHARCNET usage, amongst other things.

My supervisor forgot all about his/her username/CCRI, so my application can't go through, what should I do?

Please have them send an E-mail to help@sharcnet.ca and we will re-inform them of their login credentials.

My supervisor does not use SHARCNET, why is my supervisor asked to have an account anyway?

Your supervisor's account ID is used to identify which group your account belongs to. We account for all usage and provide defaults at the group level.

Is there any charge for using SHARCNET?

SHARCNET is free for all academic research. If you are working outside of academia we recommend you read our Commercial Access Policy which can be found in the SHARCNET web portal here.

I forgot my password

You can reset your password here, or by clicking the "Forgot password" link after trying to sign in.

I forgot my username

If you forget your username, please send an E-mail to help@sharcnet.ca. Your username for the web portal and cluster account are the same.

My account has been disabled (so i cannot login). What should I do ?

At present all academic SHARCNET accounts are automatically enabled/disabled based on the status of your corresponding Compute Canada roles. If your SHARCNET account is disabled it was most likely due to your Compute Canada account becoming expired as a result of not completing the Compute Canada account renewal / reporting process. Please see this section for instructions on keeping your account up to date.

If you are sure you completed the Compute Canada renewal process, or if you are a non-academic SHARCNET user, please email help@sharcnet.ca and we can help you with re-enabling your account.

Note: SHARCNET faculty account holders can still log in to the web portal even if their account is disabled, but holders of disabled sponsored SHARCNET accounts cannot log into any SHARCNET cluster or the web portal. If a sponsor is disabled, all of their sponsored accounts are also disabled.

How do I change the email address associated with my account?

If you wish to use a new email address you have to update your Contact Information at the Compute Canada Database.

I no longer want my SHARCNET account

If you would like to cease using SHARCNET (including access to all systems and list email) email help@sharcnet.ca. Please let us know if you'd like to disable your corresponding Compute Canada role (resulting in all its associated Compute Canada consortia accounts being disabled as well) or if you'd just like to disable your SHARCNET account independent of your other consortia accounts.

You should only request this if you want your account disabled *now* - if you do not complete the annual renewal process at Compute Canada your account will eventually be deactivated automatically.

The Acceptable Use Policy, in particular pt. 36, outlines our policy in the event that an account is disabled.

You may have your account re-enabled by emailing help@sharcnet.ca.

Logging in to Systems, Transferring and Editing Files

How do I login to SHARCNET?

There is no single point of entry at present. "Logging in to SHARCNET" means you login to one of the SHARCNET systems. A complete list of SHARCNET systems can be found on our facilities page.

Unix/Linux/OS X

To login to a system, you need to use a Secure Shell (SSH) connection. If you are logging in from a UNIX-based machine, make sure it has an SSH client (ssh) installed (this is almost always the case on UNIX/Linux/OS X). If you have the same login name on both your local system and SHARCNET, and you want to login to, say, saw, you may use the command:

ssh saw.sharcnet.ca

If your SHARCNET username is different from the username on your local systems, then you may use either of the following forms:

ssh saw.sharcnet.ca -l username
ssh username@saw.sharcnet.ca

If you want to establish an X window connection so that you can use graphics applications such as gvim and xemacs, you can add a -Y to the command:

ssh -Y username@saw.sharcnet.ca

This will automatically set the X DISPLAY variable when you login.
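If you connect often, you can put these options in your local SSH configuration file instead of typing them each time. Below is a minimal sketch of an entry in ~/.ssh/config on your own machine; the alias saw and the username are placeholders to substitute with your own values:

```
# ~/.ssh/config on your local machine
Host saw
    HostName saw.sharcnet.ca
    # your SHARCNET username
    User username
    # equivalent to the -Y option
    ForwardX11Trusted yes
```

With this entry in place, simply typing ssh saw logs you in with trusted X forwarding enabled.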

Windows

If you are logging in from a computer running Windows and need some pointers we recommend consulting our SSH tutorial.

What is the difference between Login Nodes and Development Nodes?

Login Nodes

Most of our clusters have distinct login nodes associated with them that you are automatically redirected to when you login to the cluster (some systems are directly logged into, eg. SMPs and smaller specialty systems). You can use these to do most of your work preparing for jobs (compiling, editing configuration files) and other low-intensity tasks like moving and copying files.

You can also use them for other quick tasks, like simple post-processing, but any significant work should be submitted as a job to the compute nodes.

Here is an example of logging in and being redirected to a saw login node, in this case saw-login1:

localhost:~ sn_user$ ssh saw.sharcnet.ca
Last login: Fri Oct 14 22:38:40 2011 from localhost.your_institution.ca

Welcome to the SHARCNET cluster Saw.
Please see the following URL for status of this and other clusters:
https://www.sharcnet.ca/my/systems


[sn_user@saw-login1 ~]$ hostname
saw-login1 

Development Nodes

On some systems there are also development nodes which can be used for slightly more resource intensive, interactive work. For the most part these are identical to cluster login nodes; however, they are not visible outside of their respective cluster (one can only reach them after logging into a login node) and they have more modest resource limits in place, allowing quick interactive testing outside of the job queuing system. Please see the help wiki pages for the respective clusters, Orca and Kraken, for further details on how one can use these nodes.

How can I suspend and resume my session?

The program screen can start persistent terminals from which you can detach and reattach. The simplest use of screen is

screen -R

which will either reattach you to any existing session or create a new one if one doesn't exist. To terminate the current screen session, type exit. To detach manually (you are automatically detached if the connection is lost), press ctrl+a followed by d; you can then resume later as above. Note that ctrl+a is screen's escape sequence, so you have to press ctrl+a followed by a to get the regular effect of pressing ctrl+a inside a screen session (e.g., moving the cursor to the start of the line in a shell).

For a list of other ctrl+a key sequences, press ctrl+a followed by ?. For further details and command line options, see the screen manual (or type man screen on any of the clusters).

What operating systems are supported?

UNIX in general. Currently, Linux is the only operating system used within SHARCNET.

What makes a cluster different than my UNIX workstation?

If you are familiar with UNIX, then using a cluster is not much different from using a workstation. When you login to a cluster, you in fact log in to only one of the cluster's nodes. In most cases, each cluster node is a physical machine, usually a server class machine with one or several CPUs, that is more or less the same as a workstation you are familiar with. The differences are that these nodes are interconnected with special interconnect devices and that the way you run your program is slightly different. On SHARCNET clusters, you are not expected to run your program interactively; you must run your program through a queueing system. That also means that where and when your program runs is decided not by you, but by the queueing system.

Which cluster should I use?

Each of our clusters is designed for a particular type of job. Our cluster map shows which systems are suitable for various job types.

What programming languages are supported?

The primary programming languages C, C++ and Fortran are fully supported. Other languages, such as Java, Pascal and Ada, are also supported, but with limited technical support from us. That means if your program is written in a language other than C, C++ or Fortran and you encounter a problem, we may or may not be able to solve it within a short period of time.

How do I organize my files?

Our experience is that when large amounts of storage are available it is too easy to lose track of files and let stale copies accumulate. The number of files that one can truly manage is also fairly modest and does not scale over time, or with availability of storage. Keeping one's files organized is important, and helps to ensure that important files are safe, shared storage resources are not wasted, and one's jobs can run as quickly as possible. To best meet a range of storage needs, SHARCNET provides a number of distinct storage pools that are implemented using a variety of file systems, servers, RAID levels and backup policies. These different storage locations are summarized as follows:

place      quota   expiry     access                       purpose                        backed-up?
/home      1 GB    none       unified                      sources, small config files    Yes
/work      1 TB*   none       unified*                     active data files              No
/scratch   none    4 months   per-cluster                  temporary files, checkpoints   No
/tmp       none    2 days     per-node                     node-local scratch             No
archive    none    none       unified (login nodes only)   long term data archive         No

Note: * May be less and not unified on some of our clusters (eg. requin and some of the specialty systems), type "quota" when you log into a cluster for up to date information. "unified" means that when you login, regardless of cluster, you will always see the same directory.

  • The quota column indicates if the file system has a per-user limit to the amount of data they can store.
  • The expiry column indicates if the file system automatically deletes old files and the timescale for deletion.
  • The access column indicates the scope, or availability of the file system.

Best storage to use for jobs

Since /home is remote on most clusters and is used frequently by all users, it's important that it not be used significantly for jobs (eg. reading in a small configuration file from /home is ok - writing repeatedly to many different files in /home during the course of your jobs is not).

One can do significant I/O to /work from jobs, but it is also remote to most clusters. For this reason, to obtain the best file system throughput you should use the /scratch file system. In some cases jobs may be able to make use of /tmp for local caching, but it is not recommended as a storage target for regular output.

Cluster-local Scratch storage

/scratch has no quota limit, so you can put as much data in /scratch/<userid> as you want, until there is no more space. The important thing to note, though, is that all files on /scratch that are over 122 days old will be automatically deleted (please see this knowledge base entry for details on how /scratch is purged of old files).

Backups

Backups are in place for your home directory ONLY. Scratch and global work are not backed up. In general we store one version of each file for the previous 5 working days, one for each of the 4 previous weeks, and one version per month before that. Backups began in September 2006.

Node-Local Storage

/tmp may be unavailable for use on clusters where there are no local disks on the compute nodes. Users should try to use /scratch instead, or email help@sharcnet.ca to discuss using node-local storage.

Archival Storage

To backup large volumes of data that don't need to stay available on global work or local scratch use the /archive filesystem.

It can be accessed at /archive/<userid> from any system's login nodes. This is a good place to store results for posterity (i.e. the work has been published and you just need a record), as well as for reliability, as there have been instances where data was lost on /gwork and /scratch. It is worth periodically backing up your data to archive if it is important. We recommend using rsync to transfer data to/from /archive .

In the past we provided the archive command (archive tools) to access this storage but have since discontinued its use. Please see the archiving FAQ entry for further details.

For users who want to learn more about optimizing I/O at SHARCNET please read Analyzing I/O Performance.

How are file permissions handled at SHARCNET?

By default, anyone in your group can read and access your files. You can provide access to any other users by following this Knowledge Base entry.

All SHARCNET users are associated with a primary GID (group id) belonging to the PI of the group (you can see this by running id username , with your username). This allows groups to share files without any further action, as the default file permissions for all SHARCNET storage locations (e.g. /gwork/user ) allow read (list) and execute (enter / access) permissions for the group, e.g. they appear as:

  [sn_user@req770 ~]$ ls -ld /gwork/sn_user
  drwxr-x---  5 sn_user sn_group 4096 Jan 25 22:01 /gwork/sn_user

Further, by default the umask value for all users is 0002, so any new files or directories will continue to provide access to the group.

Should you wish to keep your files private from all other users, you should set the permissions on the base directory to only be accessible to yourself. For example, if you don't want anyone to see files in your home directory, you'd run:

chmod 700 ~/

If you want to ensure that any new files or directories are created with different permissions, you can set your umask value. See the man page for further details by running:

man umask

For further information on UNIX-based file permissions please run:

man chmod
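To illustrate the effect of umask on newly created files, here is a small sketch run in a temporary directory; the filenames and the stricter 0077 mask are made up for the example:

```shell
# Demonstrate how umask affects permissions on newly created files.
demo_dir=$(mktemp -d)
cd "$demo_dir"

# The default SHARCNET umask is 0002: new files are group-readable.
umask 0002
touch shared_file      # mode 0666 & ~0002 = 0664 (rw-rw-r--)

# A umask of 0077 strips all group/other permissions from new files.
umask 0077
touch private_file     # mode 0666 & ~0077 = 0600 (rw-------)

ls -l shared_file private_file
```

The umask change only applies to files created afterwards; existing files keep their permissions until you chmod them.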

What about really large files or if I get the error 'No space left on device' in /gwork or /scratch?

If you need to work with really large files we have tips on optimizing performance with our parallel filesystems here.

How do I transfer files/directories to/from or between clusters?

Unix/Linux

To transfer files to and from a cluster from a UNIX machine, you may use scp or sftp. For example, to upload the file foo.f to the cluster narwhal from your machine myhost, use the following command

myhost$ scp foo.f narwhal.sharcnet.ca:

assuming that your machine has scp installed. If you want to transfer a file from Windows or Mac, you need to have an scp or sftp client installed.

To transfer the file foo.f between SHARCNET clusters, say from your home directory on narwhal to your scratch directory on requin, simply use the following command

[username@nar316 ~]$ scp foo.f requin:/scratch/username/

If you are transferring directories between a UNIX machine and a cluster, you may use the scp command with the -r option. For instance, if you want to download the subdirectory foo of the directory project in your home directory on saw to your local UNIX machine, use the following command on your local machine

myhost$ scp -rp saw.sharcnet.ca:project/foo .

Similarly, you can transfer the subdirectory between SHARCNET clusters. The following command

[username@nar316 ~]$ scp -rp requin:/scratch/username/foo .

will download subdirectory foo from your scratch directory on requin to your home directory on narwhal (note that the prompt indicates you are currently logged on to narwhal).

The -p option above preserves the time stamp of each file. For Windows and Mac, check the documentation of your scp client for supported features.

You may also tar and compress the entire directory and then use scp, to save bandwidth. In the above example, first log in to narwhal, then do the following

[username@nar316 ~]$ cd project
[username@nar316 project]$ tar -cvf foo.tar foo
[username@nar316 project]$ gzip foo.tar

Then on your local machine myhost, use scp to copy the tar file

myhost$ scp narwhal.sharcnet.ca:project/foo.tar.gz .

Note that for most Linux distributions, tar has an option -z that will compress the .tar file using gzip.
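The separate tar and gzip steps above can be combined using tar's -z option. A short sketch, using a temporary directory with made-up file names for illustration:

```shell
# Create a gzip-compressed tarball in one step with tar -z.
work_dir=$(mktemp -d)
cd "$work_dir"
mkdir -p project/foo
echo "sample data" > project/foo/data.txt

cd project
# -c create, -z compress with gzip, -v verbose listing, -f archive name
tar -czvf foo.tar.gz foo

# List the archive contents without extracting (also uses -z):
tar -tzf foo.tar.gz
```

The resulting foo.tar.gz is equivalent to running tar -cvf followed by gzip, and can be unpacked on the other end with tar -xzf foo.tar.gz.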

Windows

Please see our SSH tutorial for instructions on transferring files from Windows using an SSH client.

How can I best transfer large quantities of data to/from SHARCNET and what transfer rate should I expect?

In general most users should be fine using scp or rsync to transfer data to and from SHARCNET systems. If you need to transfer a lot of files rsync is recommended to ensure that you do not need to restart the transfer from scratch should there be a connection failure.

In general one should expect the following transfer rates with scp:

  • If you are connecting to SHARCNET through a Research/Education network site (ORION, CANARIE, Internet2) and are on a fast local network (this is the case for most users connecting from academic institutions) then you should be able to attain sustained transfer speeds in excess of 10MB/s.
  • If you are transferring data over the wider internet you will not be able to attain these speeds as all traffic that does not enter/exit SHARCNET via R&ENet is restricted to a limited-bandwidth commercial feed. In this case one will typically see rates on the order of 1MB/s or less.

Keep in mind that filesystems and networks are shared resources and suffer from contention; if they are busy, the above rates may not be attainable.

If you need to transfer a large quantity of data to SHARCNET and are finding your transfer rate to be slow please contact help@sharcnet.ca to request assistance. We can provide additional tips and tools to greatly improve data transfer rates, especially to systems/users outside of Ontario's regional ORION network. For example, we've observed speed-ups from <1 MB/s using scp to well over 10 MB/s between Compute Canada systems connected via CANARIE by using specialized data-transfer programs (eg. bbcp).

How do I access the same file from different subdirectories on the same cluster ?

You should not need to copy large files on the same cluster (e.g. from one user to another, or to use the same file in different subdirectories). Instead of copying, you can create a "soft link" (symbolic link). Assume that you need access to the file large_file1 in the directory /work/user1/subdir1, and you want it to appear in your directory /work/my_account/my_dir under the name my_large_file1. Go to that directory and type:

ln -s /work/user1/subdir1/large_file1    my_large_file1

Another example: assume that in the directory /work/my_account/PROJ1 you have several subdirectories called CASE1, CASE2, ... In each subdirectory CASEn you have a slightly different code, but all of them process the same data file called test_data. Rather than copying the test_data file into each CASEn subdirectory, place test_data one level up, i.e. in /work/my_account/PROJ1, and then in each CASEn subdirectory issue the following "soft link" command:

ln -s ../test_data  test_data

The "soft links" can be removed by using the rm command. For example, to remove the soft link from /work/my_account/PROJ1/CASE2 type following command from this subdirectory:

rm -rf test_data

Typing the above command from the subdirectory /work/my_account/PROJ1 would remove the actual file, and then none of the CASEn subdirectories would have access to it.
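The whole pattern can be exercised end to end; a toy sketch using made-up directory names modelled on the example above:

```shell
# Set up a shared data file and a case directory (toy layout).
mkdir -p PROJ1/CASE1
echo "shared input" > PROJ1/test_data
# Link to the shared file from the case directory.
cd PROJ1/CASE1
ln -s ../test_data test_data
# The link behaves like the real file when read.
cat test_data
cd ../..
# Removing the link does not touch the real file.
rm PROJ1/CASE1/test_data
ls PROJ1/test_data
```

Note that removing a symbolic link only deletes the link itself; the file it points to is untouched.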

How are files deleted from the /scratch filesystems?

All files on /scratch that are over 4 months old (where "old" has a special meaning here; see below) are automatically deleted. You will be sent an email notification beforehand warning you of any filesystems (not the actual files, however) where you may have files scheduled for deletion in the immediate future.

An unconventional aspect of this system is that it does not determine the age of a file based on the file's attributes, e.g. the dates reported by the stat, find, ls, etc. commands. The age of a file is determined by whether or not its data contents (i.e. the information stored in the file) have changed, and this age is stored externally to the file. Once a file is created in /scratch/<userid>, reading it, renaming it, changing the file's timestamps with the touch command, or copying it into another file are all irrelevant in terms of changing its age with respect to the purging system. The file will be expired 4 months after it was created. Only files whose contents have changed will have their age counter "reset".

Unfortunately, there currently exists no method to obtain a listing of the files that are scheduled for deletion. This is something that is being addressed, however there is no estimated time for implementation.

If you have data in /scratch that needs to persist (eg. configuration files, important simulation output) we recommend you stage it from /gwork (keeping the master copy there) or archive it as appropriate.

How to archive my data?

Presently SHARCNET provides the /archive filesystem as a regularly accessible filesystem on the login nodes of our clusters (not the compute nodes!). To back up data which you'd like to keep but don't expect to access in the foreseeable future, or to simply keep a backup of data from the global work or local scratch filesystems, one may use regular commands (cp, mv, rm, rsync, tar etc.), eg.

 cp /scratch/$USER/$SIMULATION /archive/$USER/$SIMULATION

Be extremely careful when deleting your data from the Archive: there is no backup for the data!

Deprecated Archive Tools

Please note: this toolset is no longer available -- /archive is now treated like a regular filesystem, see above

One can find archives that were created with archive in their root archive directory: /archive/user_name

These are just regular tar files. The content of each tar file can be listed with this command:

tar tvf file.tar

A tar file can be unpacked using

tar xvf file.tar

How can I check the hidden files in directory?

The "." at the beginning of the name means that the file is "hidden". You have to use the -a option with ls to see it. I.e. 'ls -a'.

If you want to display only the hidden files then type:

ls -d .*

Note: there is an alias loaded from /etc/bashrc (see your .bashrc file), defined as alias l.='ls -d .* --color=tty', so if you type:

l.

you will also display only the hidden files.

How can I count the number of files in a directory?

One can use the following command to count the number of files in a directory (in this example, your /work directory):

find /work/$USER -type f   | wc -l

It is always a good idea to archive and/or compress files that are no longer needed on the filesystem (see below). This helps minimize one's footprint on the filesystem and, as such, the impact one has on other users of the shared resource.
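One way to do this is to pack a finished directory into a compressed archive and delete the originals only once the archive verifies; a minimal sketch (the directory and file names are illustrative):

```shell
# Pack a finished run directory into a compressed tar archive.
mkdir -p old_run
echo "output" > old_run/result.dat
tar -czf old_run.tar.gz old_run
# Only remove the originals if the archive lists correctly;
# && ensures rm runs only when the tar listing succeeds.
tar -tzf old_run.tar.gz > /dev/null && rm -r old_run
```

This replaces many small files with a single compressed file, which is also friendlier to the parallel filesystems (see the next section).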

How to organize a large number of files?

With parallel cluster filesystems, you will get the best I/O performance by writing data to a small number of large files. Since all metadata operations on each of our parallel filesystems are handled by a single file server, that server can become overwhelmed when many files are being accessed, leading to poor overall I/O performance for all users. If your workflow involves storing data in a large number of files, it is best to pack these files into a small number of larger archives, e.g. using the tar command

tar cvf archiveFile.tar directoryToArchive

For better performance with many files inside your archive, we recommend using DAR (Disk ARchive utility), a disk analogue of tar (Tape ARchive) that can extract files from anywhere in the archive much faster than tar. DAR is installed as a module on SHARCNET's newer (CentOS 6) systems. You can pack files into dar archives with something like this:

module load dar
dar -s 1G -w -c archiveFile -g directoryToArchive

In this example we split the archive into 1GB chunks, and the archive files will be named archiveFile.1.dar, archiveFile.2.dar, and so on. To list the contents of the archive, you can type:

dar -l archiveFile

To temporarily extract files into the current directory for post-processing, you would type:

dar -R . -O -x archiveFile -v -g pathToYourFile/fileToExtract

I am unable to connect to one of the clusters; when I try, I am told the connection was closed by the remote host

The most likely cause of this behaviour is repeated failed login attempts. Part of our security policy involves blocking the IP address of machines that attempt multiple logins with incorrect passwords over a short period of time, since many brute-force attacks on systems do exactly this, looking for poor passwords, badly configured accounts, etc. Unfortunately, it isn't uncommon for a user to forget their password, make repeated login attempts with incorrect passwords, and end up with that machine blacklisted and unable to connect at all.

A temporary solution is simply to attempt to login from another machine. If you have access to another machine at your site, you can shell to that machine first, and then shell to the SHARCNET system (as that machine's IP shouldn't be blacklisted). In order to have your machine unblocked, you will have to file a problem ticket as a system administrator must manually intervene in order to fix it.

NOTE: there are other situations that can produce this message, however they are rarer and more transient. If you are unable to log in from one machine, but can from another, it is most likely the IP blacklisting that is the problem and the above will provide a temporary work-around while your problem ticket is processed.

I am unable to ssh/scp from SHARCNET to my local computer

Most campus networks are behind some sort of firewall. If you can ssh out to SHARCNET, but cannot establish a connection in the other direction, then you are probably behind a firewall and should speak with your local system administrator or campus IT department to determine if there are any exceptions or workarounds in place.

SSH tells me SOMEONE IS DOING SOMETHING NASTY!?

Suppose you attempt to login to SHARCNET, but instead get an alarming message like this:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
fe:65:ab:89:9a:23:34:5a:50:1e:05:d6:bf:ec:da:67.
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending key in /home/hahn/.ssh/known_hosts:42
RSA host key for requin has changed and you have requested strict checking.
Host key verification failed. 

SSH normally tries to verify that the host you're connecting to is authentic. It does this by caching the host's "hostkey" in your ~/.ssh/known_hosts file. At times, it may be necessary to legitimately change a hostkey; when this happens, you may see such a message. It's a good idea to verify this with us; you may also be able to check the fingerprint yourself by logging into another SHARCNET system and running:

ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub 

If the fingerprint is OK, the normal way to fix the problem is to simply remove the old hostkey from your known_hosts file. You can use your choice of editor if you're comfortable doing so (it's a plain text file, but has long lines). On a unix-compatible machine, you can also use the following very small script (Substitute the line(s) printed in the warning message illustrated above for '42' here.):

perl -pi -e 'undef $_ if (++$line == 42)' ~/.ssh/known_hosts
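OpenSSH also ships a purpose-built option for this, ssh-keygen -R, which removes all keys belonging to a given host from a known_hosts file. A self-contained sketch against a private file (the hostname is taken from the example above; the key is a freshly generated throwaway, since a real hostkey can't be reproduced here):

```shell
# Generate a throwaway key pair and build a known_hosts file
# containing a stale entry for the example host.
ssh-keygen -t ed25519 -N "" -f tmpkey -q
printf 'requin.sharcnet.ca %s\n' "$(cut -d' ' -f1-2 tmpkey.pub)" > kh
# Remove every key for that host; a backup is left in kh.old.
ssh-keygen -R requin.sharcnet.ca -f kh
```

Run without -f, ssh-keygen -R operates on your own ~/.ssh/known_hosts, which is the usual case.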

Another solution is brute-force: remove the whole known_hosts file. This throws away any authentication checking, and your first subsequent connection to any machine will prompt you to accept a newly discovered host key. If you find this prompt annoying and you aren't concerned about security, you can avoid it by adding a text file named ~/.ssh/config on your machine with the following content:

StrictHostKeyChecking no
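If you do go this route, it is safer to limit the relaxed checking to specific hosts rather than disabling it globally; for example, a ~/.ssh/config entry scoped to SHARCNET machines:

```
Host *.sharcnet.ca
    StrictHostKeyChecking no
```

Any host not matching the pattern keeps the normal strict checking.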

Ssh works, but scp doesn't!

If you can ssh to a cluster successfully but cannot scp to it, the problem is likely that your login scripts print unexpected messages, which confuses scp. scp is based on the same ssh protocol, but assumes that the connection is "clean": that is, that it does not produce any un-asked-for content. If your login script contains something like:

echo "Hello, Master; I await your command..."

scp will be confused by the salutation. To avoid this, simply ensure that the message is only printed on an interactive login:

if [ -t 0 ]; then
    echo "Hello, Master; I await your command..."
fi

or in csh/tcsh syntax:

if ( -t 0 ) then
    echo "Hello, Master; I await your command..."
endif

How do I edit my program on a cluster?

We provide a variety of editors, such as the traditional text-mode emacs and vi (vim), as well as a simpler one called nano. If you have X on your desktop (and properly tunneled through SSH), you can use the GUI versions (xemacs, gvim).

If you run emacs on your desktop, you can also edit a remote file from within your local emacs client using Tramp, opening and saving a file as /username@cluster.sharcnet.ca:path/file.

Compiling and Running Programs

How do I compile my programs?

To make it easier to compile across all SHARCNET clusters, we provide a generic set of commands:

cc, c++, f77, f90, f95

and for MPI,

mpicc, mpic++, mpiCC, mpif77, mpif90, mpif95

Standard SHARCNET compilation wrappers have several benefits:

  • ability to provide default optimization appropriate to the cluster's CPUs
  • using -lmpi or the mpi-prefixed commands will select the necessary cluster-specific options for MPI
  • using -llapack links with the vendor-tuned LAPACK library
  • using -openmp will direct the compiler to use OpenMP

SHARCNET compile wrapper

The wrapper commands are all aliased to a common SHARCNET compile script. You can see how compile works and what options are possible by running:

less `which compile` 

on any SHARCNET system.

To see what the compile script actually executes run it with the -show flag, eg.

[snuser@nar316 test]$ cc -intel -llapack -show lapack.c
icc lapack.c -L/opt/sharcnet/intel/11.0.083/icc/mkl/lib/ -lmkl -lguide -lpthread

Here are some basic examples:

cc foo.c -o foo
cc -openmp foo.c -llapack -o foo
f90 *.f90 -lmpi -o my_mpi_prog
mpif90 *.f90 -o my_mpi_prog
f90 -mpi -c a.f90; mpif90 -c b.f90; compile a.o b.o -lmpi -o my_mpi_prog

In the first example, the preferred compiler and optimization flags are selected, but not much else happens. In the second, the underlying compiler's OpenMP flag (which differs among compilers) is selected, and the program is linked with a system-tuned LAPACK/BLAS library. In the third example, an MPI program written in Fortran 90 is compiled and linked with whatever cluster-specific MPI libraries are required. The fourth example is identical except that the mpi-prefixed command is used. In the fifth example, two files are compiled separately, then linked against MPI; the point is that even when only compiling, you need to declare that you're using MPI via an mpi-prefixed command, -mpi, or -lmpi.

These commands invoke the underlying compilers, such as the Intel or PathScale compilers, whichever are available on the system you are using. For specific compiler options, please refer to the man pages.

You aren't required to use these commands, and may not want to if you have pre-existing Makefiles, for instance. You can always add -v to see what full commands are being generated.

What compilers are available?

For a full listing of all SHARCNET compilers see the Compiler section in the web portal software pages.

You can read the man pages for the following commands to get documentation for the underlying compiler:

system compilers
Opteron systems (requin, narwhal, orca) pathcc pgcc gcc icc ifort
Xeon systems (saw, mako) icc ifort gcc
Itanium2 systems (silky) icc ifort

On Intel Itanium 2 clusters in particular, you should always use the high performance Intel compilers icc and ifort for C/C++ and Fortran code respectively, if available. They give much better performance than the generic GNU compilers on this chip.

What standard (eg. math) libraries are available?

For a full listing of all SHARCNET software libraries see the Library section in the web portal software pages.

If you need BLAS or LAPACK routines, you should consider using the ACML libraries with the PathScale compilers on Opteron systems, and MKL with the Intel compilers on Intel hardware. ACML and MKL are vendor-optimized libraries that include BLAS and LAPACK routines. Refer to the ACML and MKL software pages for examples of their use.

Relocation overflow and/or truncated to fit errors

If you get "relocation overflow" and/or "relocation truncated to fit" errors when compiling large Fortran 77 codes with pathf90 and/or ifort, try the following:

(A) If the static data structures in your fortran 77 program are greater than 2GB you should try specifying the option -mcmodel=medium in your pathf90 or ifort command.

(B) Try running the code on a different system which has more memory:

   Other clusters that you can try are: requin or hound 

You would probably benefit from looking at the listing of all of the clusters:

https://www.sharcnet.ca/my/systems

and this page has a table showing how busy each one is:

https://www.sharcnet.ca/my/perf_cluster/cur_perf

How do I run a program?

In general, users are expected to run their jobs in "batch mode". That is, one submits a job -- the application program -- to a queue through a batch queue command, the scheduler runs the job at a later time, and the results are returned once the program finishes.

In particular, one uses the sqsub command (see What is the batch job scheduling environment SQ? below) to launch a serial job foo

sqsub -o foo.log -r 5h ./foo

This means to submit the command foo as a job with a 5 hour runtime limit and put its standard output into the file foo.log (note that it is important not to set too tight a runtime limit, as your job may sometimes run slower than expected due to interference from other jobs).

If your program takes command line arguments, place them after your program name, just as when you run the program interactively

sqsub -o foo.log -r 5h ./foo arg1 arg2...

For example, suppose your program takes command line options -i input and -o output for input and output files respectively. They will be treated as arguments of your program, not options of sqsub, as long as they appear after your program name in your sqsub command

sqsub -o foo.log -r 5h ./foo -i input.dat -o output.dat

To launch a parallel job foo_p

sqsub -q mpi -n num_cpus -o foo_p.log -r 5h ./foo_p

The basic queues on SHARCNET are:

queue usage
serial for serial jobs
mpi for parallel jobs using the MPI library
threaded for threaded jobs using OpenMP or POSIX threads

To see the status of submitted jobs, use command sqjobs.

How do I run a program interactively?

Several of the clusters now provide a collection of development nodes that can be used for this purpose. An interactive session can also be started by submitting a screen -D -m bash command as a job. If your job is a serial job, the submission line should be

sqsub -q serial -r <TIME> screen -D -m bash

Once the job begins running, figure out what compute node it has launched on

sqjobs

and then ssh to this node and attach to the running screen session

ssh -t <NODE> screen -r

You can access screen's options via the ctrl+a keystroke. Some examples are ctrl+a ? to bring up help and ctrl+a a to send a literal ctrl+a. See the screen man page (man screen) for more information. The message Suddenly the Dungeon collapses!! - You die... is screen's way of telling you it is being killed by the scheduler (most likely because the time you specified for the job has elapsed). The exit command will terminate the session.

If your job is an MPI job, the submission line should be

sqsub -q mpi --nompirun -n <NODES> -r <TIME> screen -D -fn -m bash

Once the job starts, screen will be launched on the rank-zero node. This may not be the lowest-numbered node allocated, so you have to run

qstat -f -l <JOBID> | egrep exec_host

to find out which node it is (the first one listed). You can then proceed as in the non-MPI case. The command pbsdsh -o <COMMAND> can be used to run commands on all the allocated nodes (see man pbsdsh), and the command mpirun <COMMAND> can be used to start MPI programs on the nodes.

What about running a program compiled on one cluster on another?

In general, if your program starts executing on a system other than the one it was compiled on, there are likely no issues. However, you may want to compare the results of test jobs just to make sure. The specific things to watch out for are

  1. using a particular compiler and/or optimizations,
  2. using a particular library (most frequently a specific MPI implementation), and
  3. using the /home filesystem because it is global.

In general, as long as very specific architecture optimizations are not being used (e.g., -march=native), you should be able to compile a program on one SHARCNET system and run it on others, as most systems are binary compatible and the compiler runtime libraries are installed everywhere. In particular, this is true for our larger core systems and should be true for our other specialized systems as well: all of them except the Itanium systems (silky and bramble) are x86 based. It is worth noting that some compilers produce faster code on particular processors, and some compiler optimizations may not work on all systems, so you may want to recompile in order to get the best performance. We actually have different default compilers on different systems (Intel on saw, PathScale on the Opteron systems). It is probably worth doing some comparisons on your own code, because our tests show no clear winner.

With regard to MPI, and other libraries, you have to be a little more careful. All of the core systems have most of the same libraries and use HP-MPI by default, so programs should be able to run between each system without any modification (at the end of the day as long as the runtime libraries and the necessary dependencies are installed you shouldn't have any problems). However, some of the specialty systems use OpenMPI instead of HP-MPI and have different libraries, so you will have to recompile your program to use them there.
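A quick way to check whether a binary built on one system will find its runtime libraries on another is ldd, which lists the shared libraries an executable needs; missing ones show up as "not found". A sketch using a system binary (/bin/ls) as a stand-in for your own program:

```shell
# List the shared libraries the executable needs; any library the
# runtime loader cannot find on this system shows up as "not found".
ldd /bin/ls
# Fail loudly if anything is missing.
if ldd /bin/ls | grep -q "not found"; then
    echo "missing runtime libraries"
else
    echo "all shared libraries resolved"
fi
```

Running the same check against your own executable on the target cluster before submitting jobs can save a failed run.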

Another thing to watch out for is using /home, because it is global: it is slow and is not intended to be used as a working directory for running jobs. If your program writes to the local /work and /scratch filesystems on the compute clusters, and you submit the job from /work or /scratch (so that the stdout gets written there), then running the executable from /home should be fine. However, if it is run from and/or writes to /home, it will suffer a severe performance penalty. It is probably easiest to set up your working directory in /work and simply symlink to your binary in /home.
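That layout can be sketched as follows; the paths here are local stand-ins for /home and /work, and the "program" is a trivial script, so the example is self-contained:

```shell
# Stand-ins for the /home and /work filesystems (illustrative paths).
mkdir -p home/bin work/mydir
# A trivial "program" living in the home area.
printf '#!/bin/sh\necho "running in $(pwd)"\n' > home/bin/myprog
chmod +x home/bin/myprog
# Work from the work directory via a symlink, so all output lands there.
cd work/mydir
ln -s ../../home/bin/myprog myprog
./myprog > run.log
cd ../..
cat work/mydir/run.log
```

The binary stays in the home area, but because it is invoked from the work directory, its output file is written there.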

My application runs on Windows, can I run it on SHARCNET?

It depends. If your application is written in a high-level language such as C, C++ or Fortran and is system independent (meaning it does not depend on third-party libraries available only for Windows), then you should be able to recompile and run it on SHARCNET systems. However, if your application depends entirely on Windows-only software, then you are out of luck: in general it is impossible to convert code at the binary level between Windows and UNIX platforms.

My application runs on Windows HPC clusters, can I run it on SHARCNET clusters?

If your application does not use any Windows-specific APIs, then you should be able to recompile and run it on SHARCNET's UNIX/Linux-based clusters.

My program needs to run for more than seven (7) days but user certification limits caps me at seven days of run-time; what can I do?

Although there is a higher level of user certification than the default (User1), this only affects how many processors you can use simultaneously; the seven-day run-time limit cannot be exceeded through higher levels of certification, as all SHARCNET queues are globally capped at seven (7) days of run-time. This is done primarily to encourage the practice of checkpointing, but it also prevents users from monopolizing large amounts of resources outside of dedicated allocations with long-running jobs, ensures that jobs free up nodes often enough for the scheduler to start large jobs in a modest amount of time, and allows us to drain all systems for maintenance within a reasonable time-frame.

In order to run a program that requires more than this amount of wall-clock time, you will have to make use of a checkpoint/restart mechanism so that the program can periodically save its state and be resubmitted to the queues, picking up from where it left off. It is crucial to store checkpoints so that one can avoid lengthy delays in obtaining results in the event of a failure. Investing time in testing and ensuring that one's checkpoint/resume works properly is inconvenient but ensures that valuable time and electricity are not wasted unduly in the long run. Redoing a long calculation is expensive.

Handling long jobs with chained job submission

Once you have ensured that your job can automatically resume from a checkpoint, the best way to conduct long simulations is to submit a chain of jobs, such that each subsequent job depends on the jobid before it. This will minimize the time your subsequent jobs wait to run.

This can be done with the sqsub -w flag, eg.

    -w|--waitfor=jobid[,jobid...]]
                   wait for a list of jobs to complete

For example, consider the following instance where we want job #2 to start after job #1. We first submit job #1:

[snuser@bul131 ~]$ sqsub -r 10m -o chain.test hostname
WARNING: no memory requirement defined; assuming 1GB
submitted as jobid 5648719

Now when we submit job #2 we specify the jobid from the first job:

[snuser@bul131 ~]$ sqsub -r 10m -w 5648719 -o chain.test hostname
WARNING: no memory requirement defined; assuming 1GB
submitted as jobid 5648720

Now you can see that two jobs are queued, and one is in state "*Q" - meaning that it has conditions:

[snuser@bul131 ~]$ sqjobs
  jobid  queue state ncpus nodes time command
------- ------ ----- ----- ----- ---- -------
5648719 serial     Q     1     -  15s hostname
5648720 serial    *Q     1     -   2s hostname
2232 CPUs total, 1607 busy; 1559 jobs running; 1 suspended, 6762 queued.
403 nodes allocated; 154 drain/offline, 558 total.

Looking at the second job in detail we see that it will not start until the first job has completed with an "afterok" status:

[snuser@bul131 ~]$ qstat -f 5648720 | grep -i depend
    depend = afterok:5648719.krasched@krasched 
    -N hostname -l pvmem=1024m -m n -W depend=afterok:5648719 -l walltime=

In this fashion it is possible to string many jobs together. The second job (5648720) should continue to accrue priority in the queue while the first job is running, so once the first job completes the second job should start much more quickly than if it were submitted after the first job completed.
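The chaining pattern above can be scripted. This is only a sketch: sqsub is not available here, so a mock function standing in for it (echoing the "submitted as jobid N" line shown above) lets the jobid-parsing logic be demonstrated; on a real login node you would delete the mock and the script names (run$COUNT.log, ./myjob) are made up:

```shell
#!/bin/sh
# Mock sqsub for illustration only -- on a SHARCNET login node the
# real sqsub prints a line like "submitted as jobid 5648719".
sqsub() { echo "submitted as jobid $((5648718 + COUNT))"; }

COUNT=1
PREV=""
# Submit a chain of three jobs, each waiting on the one before it.
while [ "$COUNT" -le 3 ]; do
    if [ -z "$PREV" ]; then
        OUT=$(sqsub -r 7d -o run$COUNT.log ./myjob)
    else
        OUT=$(sqsub -r 7d -w "$PREV" -o run$COUNT.log ./myjob)
    fi
    # Parse the jobid out of sqsub's output for the next -w flag.
    PREV=$(echo "$OUT" | awk '/submitted as jobid/ {print $4}')
    echo "job $COUNT has id $PREV"
    COUNT=$((COUNT + 1))
done
```

Each iteration feeds the previous jobid to -w, reproducing by hand what was shown interactively above.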

How do I checkpoint/restart my program?

Checkpointing is a valuable strategy that minimizes the waste of time and resources associated with program or node failure during a long-running job, and is effectively required for any program needing more than seven (7) days of run-time, as all SHARCNET queues are globally capped at that duration. Assuming it is a serial or multi-threaded program (i.e. *not* MPI), you can make use of the Berkeley Lab Checkpoint/Restart (BLCR) software that is provided on the clusters. Documentation and usage instructions can be found on SHARCNET's BLCR software page. Note that BLCR requires your program not to be statically compiled (i.e., it must use shared libraries).

If the program is an MPI program (or any other type requiring a specialized job starter to get it running), it will have to be able to save state and restart from that state on its own. Please check the documentation that accompanies any software you are using to see what support it has for checkpointing. If the code has been written from scratch, you will need to build checkpointing functionality into it yourself: output all relevant parameters and state such that the program can subsequently be restarted, reading in those saved values and picking up where it left off.
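The save-and-resume idea can be shown in miniature (a toy shell sketch, not BLCR; the state file name and step count are made up): the "simulation" writes its loop counter to a state file after each step, and on start-up resumes from whatever the file contains.

```shell
#!/bin/sh
# A toy "simulation" that checkpoints its loop counter so it can
# resume after being killed and resubmitted.
STATE=checkpoint.dat
# Resume from the checkpoint if one exists, else start from step 1.
if [ -f "$STATE" ]; then
    STEP=$(cat "$STATE")
else
    STEP=1
fi
while [ "$STEP" -le 10 ]; do
    echo "computing step $STEP"
    STEP=$((STEP + 1))
    # Save state only *after* completing the step, so a crash
    # mid-step causes that step to be redone, never skipped.
    echo "$STEP" > "$STATE"
done
echo "done"
```

If the job is killed partway through, resubmitting the same script picks up at the first unfinished step instead of restarting from step 1.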

How do I run a program remotely?

It is also possible to specify a command to run at the end of an ssh command. A command like ssh narwhal.sharcnet.ca sqjobs, however, will not work, because ssh does not set up a full environment by default. To get the same environment you get when you log in, it is necessary to run the command under bash in login mode.

myhost$ ssh narwhal.sharcnet.ca bash -l -c sqjobs

If you wish to specify a command longer than a single word, it is necessary to quote it, as bash -c takes only a single argument. To pass these quotes through to ssh, it is necessary to escape them; otherwise the local shell will interpret them and strip them off. An example is

myhost$ ssh narwhal.sharcnet.ca bash -l -c \' sqsub -r 5h ./myjob \'

Most problems with these commands are related to the local shell interpreting things that you wish to pass through to the remote side (e.g., stripping out any unescaped quotes). Use -v with ssh and set -x with bash to see what command(s) ssh and bash are executing respectively.

myhost$ ssh -v narwhal.sharcnet.ca bash -l -c \' sqsub -r 5h ./myjob \'
myhost$ ssh narwhal.sharcnet.ca bash -l -c \' set -x\; sqsub -r 5h ./myjob \'

Is package X preinstalled on system Y, and, if so, how do I run it?

The list of packages that SHARCNET has preinstalled on the various clusters, along with instructions on how to use them, can be found on the SHARCNET software page.

On the software page, a package is sometimes available as the default and sometimes as a module. What is the difference?

We have implemented the Modules system for all supported software packages on our clusters - each version of each software package that we have installed can be dynamically loaded or unloaded in your user environment with the module command.

See Configuring your software environment with Modules for further information, including examples.

What is the batch job scheduling environment SQ?

SQ is a unified frontend for running jobs on SHARCNET, intended to hide unnecessary differences in how the clusters are configured. On clusters which are based on RMS, LSF+RMS, or Torque+Maui, SQ is just a thin shell of scripting over the native commands. On Wobbie, the native queuing system is called SQ.

To run a job, you use sqrun:

sqrun -n 16 -q mpi -r 5h ./foo

This runs foo as an MPI command on 16 processors with a 5 hour runtime limit (make sure to be somewhat conservative with the runtime limit as a job may run for longer than expected due to interference from other jobs). You can control input, output and error output using these flags:

sqrun -o outfile -i infile -e errfile -r 5h ./foo

This will run foo with its input coming from a file named infile, its standard output going to a file named outfile, and its error output going to a file named errfile. Note that using these flags is preferred over shell redirection, since the flags permit your program to do IO directly to the file, rather than having the IO transported over sockets and then to a file.

Often, especially with IO redirection as above, it is convenient to submit a job, and not wait for it to run. To do this, simply add a --bg switch to sqrun, or equivalently use sqsub. It makes no difference to the scheduler whether you run (wait to complete) or submit (batch mode).

For threaded applications (which use Pthreads, OpenMP, or fork-based parallelism), do this:

sqsub -q threaded -n 2 -r 5h ./foo

Serial jobs require no flags beyond the runtime

sqrun -r 5h ./foo

but you can provide IO redirection flags if you wish.

How do I check running jobs and control jobs under SQ?

To show your jobs, use sqjobs. By default, it will show only your own jobs. With -a or -u all, it will show all users; similarly, -u someuser will show jobs only for that particular user.

The "state" listed for a job is one of the following:

  • Q - queued
  • R - running
  • Z - suspended (sleeping)
  • C - completed (shown briefly on some systems)
  •  ? - unknown (something is wrong, such as a node crashing)

Times shown are the amount of time since submission (for queued jobs) or since starting (for all others).

To kill, suspend or resume your jobs, use sqkill/suspend/resume with the job ID as shown by sqjobs.

Note also that providing the -v switch to sqrun/sqsub will print the jobid at submission time.

How do I translate my LSF command to SQ?

SQ very strongly resembles LSF commands such as bsub. For instance, here are two versions, the first assuming LSF, the second using SQ:

bsub -q mpi -n 16 -o term.out ./ParTISUN
sqsub -q mpi -n 16 -o term.out ./ParTISUN

There are some differences:

  • SQ doesn't have static queues like LSF. Instead, "-q" simply describes the kind of job: MPI (parallel), threaded, or serial. "test" is considered a modifier of the job type.
  • sqjobs is similar to bjobs.
  • sqkill/suspend/resume are similar to bkill/suspend/resume.

How can I submit jobs that will run wherever there are free CPUs?

We are working on a new mechanism to provide this capability.

The command 'top' gives me two different memory sizes (virt, res). What is the difference between 'virtual' and 'real' memory?

'virt' refers to the total virtual address space of the process, including virtual space that has been allocated but never actually instantiated, memory which was instantiated but has been swapped out, and memory which may be shared. 'res' is memory which is actually resident - that is, instantiated with real RAM pages. Resident memory is normally the more meaningful value, since it may be judged relative to the memory available on the node. (Recognizing, of course, that the memory on a node must be divided among the resident pages of all processes, so an individual thread should strive to keep its working set a little smaller than the node's total memory divided by the number of processors.)

There are two cases where the virtual address space size is significant. One is when the process is thrashing - that is, has a working set size bigger than available memory. Such a process will spend a lot of time in 'D' state, since it is waiting for pages to be swapped in or out. A node on which this is happening will show a substantial paging rate in the 'si' column of output from vmstat (the 'so' column is normally less significant, since si/so do not necessarily balance).

The second condition where virtual size matters is that the kernel does not implement RLIMIT_RSS, but does enforce RLIMIT_AS (virtual size). We intend to enforce a sanity-check RLIMIT_AS, and in some cases already do. The goal is to avoid a node becoming unusable or crashing when a job uses too much memory. Current settings are very conservative, though - 150% of physical memory.

In this particular case, the huge virtual size relative to the resident size is almost certainly due to the way Silky implements MPI using shared memory. Such memory is counted as part of every process involved, but obviously does not mean that N * 26.2 GB of RAM is in use.

In this case, the real memory footprint of the MPI rank is 1.2 GB - if you ran the same code on another cluster which didn't have NUMAlink shared memory, both resident and virtual sizes would be about that much. Since most of our clusters have at least 2 GB per core, this code could run comfortably on other clusters.
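As a quick way to compare resident and virtual sizes yourself, you can run ps on the node where your job is executing. This is only a sketch; the --sort option assumes GNU ps, as found on our Linux clusters:

```shell
# Show PID, resident size (RSS, KB), virtual size (VSZ, KB) and command
# name for your own processes, largest resident set first.
# Assumes GNU ps (Linux).
ps -u "$(id -un)" -o pid,rss,vsz,comm --sort=-rss | head -5
```

Compare the RSS column against the node's physical memory divided by the number of cores to judge whether your working set fits.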

Can I use a script to compile and run programs?

Yes. For instance, suppose you have a number of source files main.f, sub1.f, sub2.f, ..., subN.f. To compile them into an executable myprog, you would likely type the following command

f77 main.f sub1.f sub2.f ... subN.f -llapack -o myprog 

Here, the -o option specifies the executable name myprog rather than the default a.out, and the option -llapack at the end tells the compiler to link your program against the LAPACK library (needed if LAPACK routines are called in your program). If you have a long list of files, typing the above command every time can be really annoying. You can instead put the command in a file, say, mycomp, then make mycomp executable by typing the following command

chmod +x mycomp

Then you can just type

./mycomp

at the command line to compile your program.

This is a simple way to minimize typing, but it may wind up recompiling code which has not changed. A widely used improvement, especially for larger/many source files, is to use make. make permits recompilation of only those source files which have changed since last compilation, minimizing the time spent waiting for the compiler. On the other hand, compilers will often produce faster code if they're given all the sources at once (as above).
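The timestamp test that make relies on can be sketched in a few lines of shell. This is only an illustration of the idea (the file names and the commented-out f77 command are from the example above), not a replacement for a real makefile:

```shell
#!/bin/bash
# Return success (0) if the target is missing or any source file is
# newer than it, i.e. a rebuild is needed - the same test make performs.
rebuild_needed() {
    local target=$1; shift
    [ ! -e "$target" ] && return 0             # no executable yet: build it
    local src
    for src in "$@"; do
        [ "$src" -nt "$target" ] && return 0   # a source is newer than target
    done
    return 1                                   # everything is up to date
}

if rebuild_needed myprog main.f sub1.f sub2.f; then
    echo "recompiling"
    # f77 main.f sub1.f sub2.f -llapack -o myprog
fi
```

make generalizes this per object file, so only the changed sources are recompiled.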

I get errors trying to redirect input into my program when submitted to the queues, but it runs fine if run interactively

The standard method to attach a file as the input to a program when submitting to SHARCNET queues is to use the -i flag to sqsub, e.g.:

sqsub -q serial -i inputfile.txt ...

Occasionally you will encounter a situation where this approach appears not to work, and your program fails to run successfully (reasons for which can be very subtle). Here is an example of one such message that was being generated by a FORTRAN program:

lib-4001 : UNRECOVERABLE library error 
    A READ operation tried to read past the end-of-file.

Encountered during a list-directed READ from unit 5 
Fortran unit 5 is connected to a sequential formatted text file 
    (standard input). 
/opt/sharcnet/sharcnet-lsf/bin/sn_job_starter.sh: line 75: 25730 Aborted (core dumped) "$@"

yet if run on the command line, using standard shell redirection, it works fine, e.g.:

program < inputfile.txt

Rather than struggle with this issue, there is an easy workaround: instead of submitting the program directly, submit a script that takes the name of the file for input redirection as an argument, and have that script launch your program making use of shell redirection. This circumvents whatever issue the scheduler is having by not having to do the redirection of the input via the submission command. The following shell script will do this (you can copy this directly into a text file and save it to disk; the name of the file is arbitrary but we'll assume it to be exe_wrapper.sh).

Bash Shell script: exe_wrapper.sh
#!/bin/bash
 
EXENAME=replace_with_name_of_real_executable_program
 
if (( $# != 1 )); then
        echo "ERROR: incorrect invocation of script"
        echo "usage: ./exe_wrapper.sh <input_file>"
        exit 1
fi
 
./${EXENAME} < ${1}

Note that you must edit the EXENAME variable to reference the name of the actual executable; the script can easily be modified to take or pass additional arguments to the program being executed as desired. Ensure the script is executable by running chmod +x exe_wrapper.sh. You can now submit the job by submitting the *script*, with a single argument being the file to be used as input, i.e.:

sqsub -q serial -r 5h -o outputfile.log ./exe_wrapper.sh inputfile.txt

This will result in the job being run on a compute node as if you had typed:

./program < inputfile.txt

NOTE: this workaround, as provided, will only work for serial programs, but can be modified to work with MPI jobs by further leveraging the --nompirun option to the scheduler, and launching the parallel job within the script using mpirun directly. This is explained below.

How do I submit an MPI job such that it doesn't automatically execute mpirun?

This can be done by using the --nompirun flag when submitting your job with sqsub. By default, MPI jobs submitted via sqsub -q mpi are expected to be MPI programs, and the system automatically launches your program with mpirun. While this is convenient in most cases, some users may want to implement pre or post processing for their jobs, in which case they may want to encapsulate their MPI job in a shell script.

Using --nompirun means that you have to take responsibility for providing the correct MPI launch mechanism, which depends on the scheduler as well as the MPI library in use. You can actually see what the system default is by running sqsub -vd ....

system   MPI launch prefix
most     /opt/hpmpi/bin/mpirun -srun
newer    /opt/sharcnet/openmpi/current/intel/bin/mpirun

Most of our systems (Saw, Narwhal, Requin, and others which run XC, LSF-slurm and HP-MPI) use the first form. Our newer-generation systems (which include Orca, Hound, Goblin, Angel, Brown and others) are based on Centos, Torque/Maui/Moab and OpenMPI.

The basic idea is that you'd write a shell script (eg. named mpi_job_wrapper.x) to do some actions surrounding your actual MPI job (using requin as an example here):

#!/bin/bash
echo "hello this could be any pre-processing commands"
/opt/hpmpi/bin/mpirun -srun ./mpi_job.x
echo "hello this could be any post-processing commands"

You would then make this script executable with:

chmod +x mpi_job_wrapper.x

and submit this to run on 4 cpus for 7 days with job output sent to wrapper_job.out:

sqsub -r 7d -q mpi -n 4 --nompirun -o wrapper_job.out ./mpi_job_wrapper.x

now you would see the following output in ./wrapper_job.out:

hello this could be any pre-processing commands
<any output from the MPI job>
hello this could be any post-processing commands

On newer clusters (e.g., orca), due to the spread of memory and cores across sockets/dies, getting good performance requires binding your processes to cores so they don't wander away from the local resources they start using. The mpirun flags --bind-to-core and --cpus-per-proc are for this. If sqsub -vd ... shows these flags, make sure to duplicate them in your own scripts. If it does not show them, do not use them: they require special scheduler support, and without it, your processes will wind up bound to cores other jobs are using.

There are a number of reasons NOT to use your own scripts as well: with --nompirun, your job will have allocated a number of cpus, but the non-MPI portions of your script will run serially. This wastes cycles on all but one of the processors - a serious concern for long serial sections and/or jobs with many cpus. "sqsub --waitfor" provides a potentially more efficient mechanism for chaining jobs together, since it permits a hypothetical serial post-processing step to allocate only a single CPU.

But this also brings up another use-case: your --nompirun script might also consist of multiple MPI sub-jobs. For instance, you may have chosen to break up your workflow into two separate MPI programs, and want to run them successively. You can do this with such a script, including possible adjustments, perhaps to output files, between the two MPI programs. Some of our users have run iterative MPI jobs this way, where an MPI program is run, its outputs massaged or adjusted, and the MPI program run again. Strictly speaking, you can do whatever you want with the resources you allocate as part of a job - multiple MPI subjobs, serial sections, etc.

Some jobs need to know the allocated node names and the number of CPUs on each node, for instance to construct their own hostfile. This information is available in the '$LSB_MCPU_HOSTS' environment variable. You may insert the lines below into your bash script

echo $LSB_MCPU_HOSTS 
arr=($LSB_MCPU_HOSTS)
echo "Hostname= ${arr[0]}"
echo "# of cpus= ${arr[1]}"

Then, you may see

bru2 4
Hostname= bru2
# of cpus= 4

in your output file. Utilizing this, you can construct your own hostfile whenever you submit your job.

The following example shows a job wrapper script (eg. ./mpi_job_wrapper.x ) that translates an LSF job layout to an OpenMPI hostfile, and launches the job on the nodes in a round robin fashion:

 #!/bin/bash
 echo 'hosts:' $LSB_MCPU_HOSTS
 arr=($LSB_MCPU_HOSTS)
 if [ -e ./hostfile.$$ ]
 then
     rm -f ./hostfile.$$
 fi
 for (( i = 0 ; i < ${#arr[@]}-1 ; i=i+2 ))
 do
     echo ${arr[$i]} slots=${arr[$i+1]} >> ./hostfile.$$
 done
 /opt/sharcnet/openmpi/current/intel/bin/mpirun -np 2 -hostfile ./hostfile.$$ -bynode ./a.out

Note that one would still have to set the desired number of processes in the final line (in this case it is only set to 2). This could serve as a framework for developing more complicated job wrapper scripts for OpenMPI on the XC systems.

If you are having issues with using --nompirun we recommend that you submit a problem ticket so that staff can help you figure out how it should be utilized on the particular system you are using.

How do I submit a large number of jobs with a script?

There are two methods: you can pack a large number of runs into a single submitted job, or you can use a script to submit a large number of jobs to the scheduler.

With the first method, you would write a shell script (let us call it start.sh) similar to the one found above. On requin with the older HP-MPI it would be something like this:

#!/bin/csh
/opt/hpmpi/bin/mpirun -srun ./mpiRun1 inputFile1
/opt/hpmpi/bin/mpirun -srun ./mpiRun2 inputFile2
/opt/hpmpi/bin/mpirun -srun ./mpiRun3 inputFile3
echo Job finishes at `date`.
exit

On orca with OpenMPI the script would be (note that the number of processors should match whatever you specify with sqsub):

#!/bin/bash
/opt/sharcnet/openmpi/1.6.2/intel/bin/mpirun -np 4 --machinefile $PBS_NODEFILE ./mpiRun1
/opt/sharcnet/openmpi/1.6.2/intel/bin/mpirun -np 4 --machinefile $PBS_NODEFILE ./mpiRun2
/opt/sharcnet/openmpi/1.6.2/intel/bin/mpirun -np 4 --machinefile $PBS_NODEFILE ./mpiRun3

Then you can submit it with:

sqsub -r 7d -q mpi -n 4 --nompirun -o outputFile ./start.sh

Your MPI runs (mpiRun1, mpiRun2, mpiRun3) will execute one at a time, using all available processors within the job's allocation, i.e. whatever you specify with the -n option in sqsub. Please be aware of the total execution time for all runs: with a large number of runs it can easily exceed the maximum allowed 7 days, in which case the remaining runs will never start.

With the second method, your script would contain sqsub inside it. This approach is described in Managing a Large Number of Jobs and in more detail in Throughput Computing.
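As a minimal sketch of the second method (the names myprog, input_*.txt and the 5h runtime are placeholders, not part of SQ), a helper can print one sqsub command per input file; inspect the output, then pipe it to sh to actually submit:

```shell
#!/bin/bash
# Print one sqsub command line per input file; each job's output log is
# named after its input file. All names here are illustrative.
generate_submissions() {
    local f
    for f in "$@"; do
        echo "sqsub -q serial -r 5h -o ${f%.txt}.log ./myprog $f"
    done
}

# Dry run: show what would be submitted for two hypothetical inputs.
generate_submissions input_1.txt input_2.txt
# To submit for real: generate_submissions input_*.txt | sh
```

Printing the commands first makes it easy to sanity-check hundreds of submissions before committing to them.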

How do I submit a job to run on a specific node?

Sometimes there is a need to submit a job to a specific node or nodes, e.g. if a cluster has a variety of interconnects and you want to run your job on a specific interconnect which is wired to specific nodes. In that case you would use a command such as

sqsub -q mpi -r 5m -n 16 -N 2 --nodes=saw[18-19] -o output.log ./yourExecutable

where the range of nodes is specified with --nodes, and the number of cores (-n) should be consistent with the number of nodes (-N); for example, on saw there are 8 cores per node, so -N 2 corresponds to -n 16. If you want to submit your job to one specific node, then the command would be

sqsub -q threaded -r 5m -n 8 -N 1 --nodes=saw18 -o output.log ./yourExecutable

Note that the nodes you want might not be available for a while, so the job is likely to wait longer in the queue.

I have a program that runs on my workstation, how can I have it run in parallel?

If the program was written without parallelism in mind, then there is very little that you can do to run it automatically in parallel. Some compilers are able to translate certain serial portions of a program, such as loops, into equivalent parallel code, which lets you exploit the potential of symmetric multiprocessing (SMP) systems. Also, some libraries are able to use parallelism internally, without any change in the user's program. For this to work, your program needs to spend most of its time in the library, of course - the parallel library doesn't speed up the rest of your program. Examples of this include threaded linear algebra and FFT libraries.

However, to gain true parallelism and scalability, you will need to either rewrite the code using the message passing interface (MPI) library or annotate your program using OpenMP directives. We will be happy to help you parallelize your code if you wish. (Note that OpenMP is inherently limited by the size of a single node or SMP machine - most SHARCNET resources are distributed-memory clusters, so scaling beyond a single node requires MPI.)

Also, the preceding answer pertains only to the idea of running a single program faster using parallelism. Often, you might want to run many different configurations of your program, differing only in a set of input parameters. This is common when doing Monte Carlo simulation, for instance. It's usually best to start out doing this as a series of independent serial jobs. It is possible to implement this kind of loosely-coupled parallelism using MPI, but it is often less efficient and more difficult.

How can I have a quick test run of my program?

Debugging and development often require the ability to quickly test your program repeatedly. At SHARCNET we facilitate this work by providing a pre-emptive testing queue on each of our clusters, and a set of interactive development nodes on the larger clusters.

The test queue is not available on all systems; where it is, we recommend using it in most cases as it is convenient and prepares one for eventually working in the production environment. Development nodes allow one to work interactively with a program outside of the job scheduling system and production environment, but we only set aside a limited number of them on the larger clusters. The rest of this section addresses only the test queue; for more information on development nodes see the Kraken or Orca cluster pages.

The test queue allows one to quickly test a program in the job environment to ensure that the job will start properly, and can be useful for debugging. It also has the benefit of allowing you to debug any size of job. Do not abuse the test queue: doing so will have an impact on your fairshare job scheduling priority, and test jobs temporarily interrupt other users' production jobs, slowing down other users of the system.

Note that the flag for submitting to the test queue is provided in addition to the regular queue selection flag. If you are submitting an MPI job to the test queue, both -q mpi and -t should be provided. If you omit the -q flag, you may get odd errors about libraries not being found: without knowing the type of job, the system simply doesn't know how to start your program correctly.

To perform a test run, use sqsub option --test or -t. For example, if you have an MPI program mytest that uses 8 processors, you may use the following command

sqsub --test -q mpi -n 8 -o mytest.log ./mytest

The only difference here is the addition of the "--test" flag (note -q appears as would be normal for the job). The scheduler will normally start such test jobs within a few seconds.

The main purpose of the test queue is to quickly verify the startup of a changed job - to check that a real, production run won't hit a bug shortly after starting due to, for instance, missing parameters.

The "test queue" only allows a job to run for a short period of time (currently 1 hour), so you must make sure that your test run will not take longer than this to finish. Only one test job may run at a time. In addition, the system monitors user submissions and decreases the priority of submitted jobs over time within an internally defined time window; hence if you keep submitting test runs, the waiting time before those jobs start will grow, or you will not be able to submit test jobs any more. Test jobs are treated as "costing" four times as much as normal jobs.

Which system should I choose?

There are many clusters, many of them specialized in some way. We provide an interactive map of SHARCNET systems on the web portal which visually presents a variety of criteria as a decision making aid. In brief however, depending on the nature of your jobs, there may be a clear preference for which cluster is most appropriate:

is your job serial?
Kraken is probably the right choice, since it has a very large number of processors, and consequently has high throughput. Your job will probably run soonest if you submit it here.
do you use a lot of memory?
Orca or Hound is probably the right choice.
does your MPI program utilize a lot of communication?
Orca, Saw, Requin, Hound, Rainbow, Angel, Monk and Brown have the fastest networks, but it's worth trying Kraken if you aren't familiar with the specific differences between Quadrics, Myrinet and Infiniband.
does your job (or set of jobs) do a lot of disk IO?
you probably want to stick to one of the major clusters (Narwhal/Requin/Saw) which have bigger and much faster (parallel) filesystems.

Where can I find available resources?

Information about available computational resources is available to the public on the SHARCNET web site: our systems page and our cluster performance page.

Changes in the status of each system, such as downtime or power outages, are announced through the following three channels:

  • Web links under systems. You need to check the web site from time to time in order to catch such public announcements.
  • System notice mailing list. This is the passive way of being informed: you receive notices by e-mail as soon as they are announced. However, some people find the messages annoying, and such notices may be buried among dozens or hundreds of other e-mail messages in your mailbox, hence easily missed.
  • SHARCNET RSS broadcasting. A good analogy for RSS is traffic information on the radio. When you are on a road trip and want to know the traffic conditions ahead, you turn on the car radio, tune in to a traffic news station and listen for updates periodically. Similarly, if you want to know the status of SHARCNET systems or the latest SHARCNET news, events and workshops, you can turn to RSS feeds on your desktop computer.

The term RSS may stand for Really Simple Syndication, RDF Site Summary, or Rich Site Summary depending on the version. Written in the format of XML, RSS feeds are used by websites to syndicate their content. RSS feeds allow you to read through the news you want, at your own convenience. The messages will show up on your desktop, e.g. using Mozilla Thunderbird, an integrated mail client software, as soon as there is an update.

Can I find my job submission history?

Yes. Every job submission is recorded in a database. Each record contains the command, the submission time, the start time, the completion time, the exit status of your program (i.e. succeeded or failed), the number of CPUs used, the system, and so on.

You may review the history by logging in to your web account.

How are jobs scheduled?

Job scheduling is the mechanism which selects waiting jobs ("queued") to be started ("dispatched") on nodes in the cluster. On all of the major SHARCNET production clusters, resources are "exclusively" scheduled so that a job will have complete access to the CPUs, GPUs or memory that it is currently running on (it may be pre-empted during the course of its execution, as noted below). Details as to how jobs are scheduled follow below.

How long will it take for my queued job to start?

In practice, if your potential job does not cause you to exceed your user certification per-user process limit and there are enough free resources to satisfy the processor and memory layout you've requested for your job, and no one else has any jobs queued, then you should expect your jobs to start immediately. Once there are more jobs queued than available resources, the scheduler will attempt to arbitrate between the CPU demands of all queued jobs. This arbitration happens in the following order: Dedicated Resource jobs first, then "test" jobs (which may also preempt normal jobs), and finally normal jobs. Within the set of pending normal jobs, the scheduler will prefer jobs belonging to groups which have high Fairshare priority (see below).

For information on expected queue wait times, users can check the Recent Cluster Statistics table in the web portal. This is historical data and may not correspond to the current job load on the cluster, but it is useful for identifying longer-term trends. The idea is that if you are waiting unduly long on a particular cluster for your jobs to start, you may be able to find another similar cluster where the waittime is shorter.

Another way to minimize your queue waittime is to submit smaller jobs. Typically it is harder for the scheduler to free up resources for larger jobs (in terms of number of cpus, number of nodes, and memory per process), and as such smaller jobs do not wait as long in the queue. The best approach is to measure the scaling efficiency of your code to find the sweet spot where your job finishes in a reasonable amount of time but waits for the least amount of time in the queue. Please see this tutorial for more information on parallel scaling performance and how to measure it effectively.

What determines my job priority relative to other groups?

The priority of different jobs on the systems is ranked according to the usage by the entire group, across SHARCNET. This system is called Fairshare.

Fairshare is based on a measure of recent (currently, past 2 months) resource usage. All user groups are ranked into 5 priority levels, with the heaviest users given lowest priority. You can examine your group's recent usage and priority here: Research Group's Usage and Priority.

This system exists to allow for new and/or light users to get their jobs running without having to wait in the queue while more resource consuming groups monopolize the systems.

Why did my job get suspended?

Sometimes your job may appear to be in a running state, yet nothing is happening and it isn't producing the expected output. In this case the job has probably been suspended to allow another job to run in its place briefly.

Jobs are sometimes preempted (put into a suspended state) if another higher-priority job must be started. Normally, preemption happens only for "test" jobs, which are fairly short (always less than 1 hour). After being preempted, a job will be automatically resumed (and the intervening period is not counted as usage.)

On contributed systems, the PI who contributed equipment and their group have high-priority access and their jobs will preempt non-contributor jobs if there are no free processors.

My job cannot allocate memory

The default memory allocation is usually 2G on most clusters. If your job requires more memory and is failing with a message "Cannot allocate memory", you should try adding the "--mpp=4g" flag to your sqsub command, with the value (in this case 4g - 4 gigabytes) set large enough to accommodate your job.

Some specific scheduling idiosyncrasies:

One problem with cluster scheduling is that for a typical mix of job types (serial, threaded, various-sized MPI), the scheduler will rarely accumulate enough free CPUs at once to start any larger job. When a job completes, it frees N CPUs. If there's an N-CPU job queued (and of appropriate priority), it will be run. Frequently, jobs smaller than N will start instead. This may still give 100% utilization, but each of those jobs will complete, probably at different times, effectively fragmenting the N CPUs into several smaller sets. Only a period of idleness (a lack of queued smaller jobs) will allow enough CPUs to accumulate to let larger jobs run.

Requin is intended to enable "capability", or very large jobs. Rather than eliminating the ability to run more modest job sizes, Requin is configured with a weekly cycle: every Monday at noon, all previously running jobs will have finished and large queued jobs can start. One implication of this is that no job over 1 week can be run (and a 1-week job will only have one chance per week to start). Shorter jobs can be started at any time, but only a 1-day job can be started on Sunday, for instance.

Note that all clusters now enforce runtime limits - if the job is still running at the end of the stated limit, it will be terminated. (Before December 1 2008, only Narwhal would enforce runtime limits.) Note also that when a job is suspended (preempted), this runtime clock stops: suspended time doesn't count, so it really is a limit on "time spent running", not elapsed/wallclock time.

Finally, when running DDT or OPT (debugger and profiler), it's normal to use the test queue. If you need to run such jobs longer than 1 hour, and find the wait times too high when using the normal queues, let us know (open a ticket). It may be that we need to provide a special queue for these uses - possibly preemptive like the test queue.

How do I run the same command on multiple clusters simultaneously?

If you're using bash and can log in to SHARCNET with authentication agent connection forwarding (the -A flag; i.e. you've set up SSH keys; see Choosing_A_Password#Use_SSH_Keys_Instead.21 for a starting point), add the following environment variable and function to your ~/.bashrc shell configuration file:

~/.bashrc configuration: multiple cluster command
export SN_CLUSTERS="bala bruce coral dolphin goblin gulper mako megaladon narwhal requin silky spinner tiger wobbie zebra"
 
function clusterExec {
  for clus in $SN_CLUSTERS; do
     ping -q -w 1 $clus &> /dev/null
     if [ $? = "0" ]; then echo ">>> "$clus":"; echo ""; ssh $clus ". ~/.bashrc; $1"; else echo ">>> "$clus down; echo ""; fi
   done
}

You can select the relevant systems in the SN_CLUSTERS environment variable.

To use this function, reset your shell environment (ie. log out and back in again), then run:

clusterExec uptime

You will see the uptime of each cluster login node; clusters that cannot be reached are reported as down.

If you have old host keys (not sure why these should change...) then you'll have to clean out your ~/.ssh/known_hosts file and repopulate it with the new keys. If you suspect a problem contact an administrator for key validation or email help@sharcnet.ca. For more information see Knowledge_Base#SSH_tells_me_SOMEONE_IS_DOING_SOMETHING_NASTY.21.3F.


How do I load different modules on different clusters?

SHARCNET provides environment variables named $CLUSTER, which is the system's hostname (without sharcnet.ca), and $CLU, which resolves to a three-character identifier unique to each system (typically the first three letters of the cluster's name). You can use these in your ~/.bashrc to load certain software only on a particular system. For example, you can create a case statement in your ~/.bashrc shell configuration file based on the value of $CLUSTER:


~/.bashrc configuration: loading different modules on different systems
case `echo $CLUSTER` in
  orca)
  #load intel v11.1.069 when on orca instead of the default
  	module unload intel
  	module load intel/11.1.069 
  ;;
  mako)
  #alias vim to vi on mako, as the former isn't installed
    alias vim=vi
  ;;
  *)
    #Anything we want to end up in "other" here....
  ;;
esac

One can use $CLU as it is shorter and more convenient. Instead of a case statement, one can conditionally load or unload certain software on particular systems by inserting lines like the following in ~/.bashrc:

~/.bashrc configuration: loading different modules on different systems
#to load gromacs only on saw: 
if [ $CLU == 'saw' ]; then module load gromacs; fi 
#to load octave on any system except saw: 
if [ $CLU != 'saw' ]; then module load octave; fi

I can't run jobs because I'm overquota?

If you exceed your disk quota on our systems you will be placed into a special "overquota" group and will be unable to run jobs. SHARCNET's disk monitoring system runs periodically (typically on the order of once a day), so if you have just cleaned up your files you may have to wait until it runs again for your quota status to update. You can see your current quota status from the system's point of view by running:

 quota $USER

If you can't submit jobs even after the system has updated your status it is likely because you are logged into an old shell which still shows you in the overquota unix group. Log out and back in again and then you should be able to submit jobs.

If you're cleaning up and not sure how much space you are using on a particular filesystem, then you will want to use the du command, eg.

 du -h --max-depth=1 /work/$USER

This will count space used by each directory in /work/$USER and the total space, and present it in a human-readable format.
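To see at a glance where the space has gone, du can be combined with sort. This is a sketch (the function name is illustrative); point it at whichever filesystem you are cleaning up:

```shell
#!/bin/bash
# List the five largest immediate subdirectories of a path, sizes in KB,
# biggest last. Assumes GNU du (Linux). Usage: biggest_dirs /work/$USER
biggest_dirs() {
    du -k --max-depth=1 "$1" 2>/dev/null | sort -n | tail -5
}

# Example: summarize the current directory.
biggest_dirs .
```

Deleting or archiving the directories at the bottom of this list is usually the fastest way back under quota.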

Programming and Debugging

What is MPI?

MPI stands for Message Passing Interface, a standard for writing portable parallel programs which is well-accepted in the scientific computing community. MPI is implemented as a library of subroutines which is layered on top of a network interface. The MPI standard provides both C/C++ and Fortran interfaces, so programs in any of these languages can use MPI. There are several MPI implementations, including OpenMPI and MPICH. Specific high-performance interconnect vendors also provide their own libraries - usually a version of MPICH layered on an interconnect-specific hardware library. For SHARCNET Alpha clusters, the interconnect is Quadrics, which provides MPI and a low-level library called "elan". For Myrinet, the low-level library is MX or GM.

For an MPI tutorial refer to MPI tutorial.

In addition to C/C++ and Fortran versions of MPI, there exist other language bindings as well. If you have any special needs, please contact us.

What is OpenMP?

OpenMP is a standard for programming shared memory systems using threads, with compiler directives instrumented in the source code. It provides a higher-level approach to utilizing multiple processors within a single machine while keeping the structure of the source code as close to the conventional form as possible. OpenMP is much easier to use than the alternative (Pthreads) and thus is suitable for adding modest amounts of parallelism to pre-existing code. Because OpenMP consists of compiler directives (plus a small runtime library), your code can still be compiled by a serial compiler and should still behave the same.

OpenMP for C/C++ and Fortran is supported by many compilers, including the PathScale and PGI compilers for Opterons, and the Intel compilers for IA32 and IA64 systems (such as SGI's Altix). OpenMP support has been provided in the GNU compiler suite since v4.2 (OpenMP 2.5), and starting with v4.4 it supports the OpenMP 3.0 standard.

How do I run an OpenMP program with multiple threads?

An OpenMP program uses a single process with multiple threads rather than multiple processes. On SMP systems, threads are scheduled on available processors and thus run concurrently. In order for each thread to run on its own processor, one needs to request the same number of CPUs as the number of threads to be used. This is done differently on the various SHARCNET systems where queueing systems are used. For instance, on Tru64 Alpha clusters, to run an OpenMP program foo that uses four threads with the sqsub command, use the following

sqsub -q threaded -n 4 ./foo

The option -n 4 reserves 4 CPUs for the process. The same command applies on all systems which support sqsub (SQ).
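Outside the queueing system (for example, on your own workstation), the number of OpenMP threads is normally controlled by the OMP_NUM_THREADS environment variable; sqsub -n 4 arranges the equivalent for a queued job. A minimal sketch:

```shell
# Request four OpenMP threads for any OpenMP program subsequently
# launched from this shell; a program named foo run as ./foo here
# would start 4 threads.
export OMP_NUM_THREADS=4
echo "threads requested: $OMP_NUM_THREADS"
```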

For a basic OpenMP tutorial refer to OpenMP tutorial.

What mathematics libraries are available?

Every system has the basic linear algebra libraries BLAS and LAPACK installed. Normally, these interfaces are provided by vendor-tuned libraries. On Intel-based (Xeon, Itanium2) clusters, users have access to the Intel Math Kernel Library (MKL). On Opteron-based clusters, AMD's ACML library is available.

One may also find the GNU Scientific Library (GSL) useful for particular needs. GSL is an optional package, available on any machine.

For a detailed list of libraries on each cluster, please check the documentation on the corresponding SHARCNET satellite web sites.

How do I use mathematics libraries such as BLAS and LAPACK routines?

First you need to know which subroutine you want to use; check the references to find which routines meet your needs. Then place calls to those routines in your program and compile it against the particular libraries that provide them. For instance, if you want to compute the eigenvalues, and optionally the eigenvectors, of an N by N real nonsymmetric matrix in double precision, you will find that the LAPACK routine DGEEV does this. All you need to do is call DGEEV with the parameters specified in the LAPACK documentation, and compile your program to link against the LAPACK library:

f77 -o myprog main.f sub1.f sub2.f ... sub13.f -llapack

The option -llapack tells the compiler to use library liblapack.a.

If the system you are using has vendor-supplied libraries with optimized LAPACK routines, such as Intel's Math Kernel Library (libmkl.a) or AMD's ACML library (libacml.a), then use those libraries with the options -lmkl or -lacml instead, as they will give you better performance. The installation directories of these vendor libraries may vary from site to site. If such a library is not installed in a standard directory (/lib, /usr/lib or /usr/local/lib), then chances are you will have to specify the search path for the compiler. For instance, on the Itanium2 cluster Spinner, the Intel version of LAPACK in the Math Kernel Library is located in /opt/intel/mkl/lib/64, so the above example becomes

ifort -o myprog main.f sub1.f sub2.f ... sub13.f -L/opt/intel/mkl/lib/64 -lmkl_lapack

where ifort is the Intel Fortran compiler, and the options -L/opt/intel/mkl/lib/64 -lmkl_lapack specify the library path and library, respectively. Please check the local documentation at each site for details.

You should never need to copy or use the individual source code of those library routines and compile them together with your program.

My code is written in C/C++, can I still use those libraries?

Yes. Most of the libraries have C interfaces. If you are not sure about the C interface, or you need assistance using libraries written in Fortran, we can help you on a case-by-case basis.

What packages are available?

Various packages have been installed on SHARCNET clusters at users' requests. Custom installed packages include, for example, Gaussian, PETSc, R, Featflow, Gamess, Tinker, Rasmol, and Maple. Please check the SHARCNET web portal for the software packages installed and related usage information.

What interconnects are used on SHARCNET clusters?

Currently, several different interconnects are being used on SHARCNET clusters: Quadrics, Myrinet, InfiniBand and standard IP-based ethernet.

I would like to do some grid computing, how should I proceed?

It depends on what you mean by "grid computing". If you simply mean you want to queue up a bunch of jobs (MPI, threaded or serial) and have them run without further attention, then great! SHARCNET's model is exactly that kind of grid. However, we do not attempt to hide differences between clusters, such as remote file systems or different types of CPUs and interconnects. We do not currently attempt to provide a single queue which feeds jobs to all of the clusters. Such a unified grid would require you to ensure that your program was compiled and configured to run under Alpha Linux, Alpha Tru64, IA32 Linux, IA64 Linux and AMD64 Linux. It would also have to assume nothing about shared file systems, and it would have to be aware of the 5000x difference in latency when sending messages within a cluster versus between clusters, as well as either rely on least-common-denominator networking (ethernet) or else explicitly manage the differences between Quadrics, Myrinet, InfiniBand and ethernet.

If, however, you would like to try something "unusual" that requires much more freedom than the current resource management system can handle, then, you would need to discuss the details of your plan with us for special arrangement.

Debugging serial and parallel programs

A debugger is a program which helps identify mistakes ("bugs") in programs, either at run time or "post-mortem" (by analyzing the core file produced by a crashed program). Debuggers can be either command-line or GUI (graphical user interface) based. Before a program can be debugged, it needs to be (re-)compiled with the -g switch, which tells the compiler to include symbolic information in the executable. For MPI problems on the HP XC clusters, linking with -ldmpi includes the HP MPI diagnostic library, which is very helpful for discovering incorrect use of the API.

SHARCNET highly recommends using our commercial debugger DDT. It has a very friendly GUI and can be used for debugging serial, threaded, and MPI programs. A short description of DDT and cluster availability information can be found on its software page. Please also refer to our detailed Parallel Debugging with DDT PDF tutorial.

SHARCNET also provides command-line debuggers: gdb (installed on all clusters; type "man gdb" for a list of options and see our Common Bugs and Debugging with gdb tutorial), pathdb (Opteron clusters), and idb (Silky). The idb debugger also has a GUI (run "idb -gui").

How do I kill hung-up processes?

Refer to killing hungup processes for instructions.

What if I do not want a core dump?

When you submit a batch job to the test queue, it will automatically produce a core dump in the event that a segmentation fault occurs.

This is controlled by the "ulimit -c" option. If you do not want a core dump you have to submit a script and in that script, specify

"ulimit -c 0"

For illustration purposes consider following simple program, residing in file simple.c:

#include <stdio.h>
 
int main(void) {
    int i;
    int array[10];
 
    i = 500000000;                /* far beyond the bounds of array */
    printf("Index i = %d\n", i);
    printf("%d\n", array[i]);     /* out-of-bounds read: segmentation fault */
    return 0;
}

We compile the above program using the command:

   gcc -g simple.c

which produces the executable a.out, and then submit the following script to execute the job in batch mode in the test queue:

   ./sub_job


where the sub_job script file is as follows:

#!/bin/bash
 
sqsub -t -r 1   -q serial -o ${CLU}_CODE_%J  ./a.out

The above procedure produces an output file which starts with the following lines:

srun: error: nar315: task0: Segmentation fault (core dumped) srun: Terminating job


and a core dump file was produced.


If the program is large, the core file will also be very large and will take a lot of space and time to write.

So, for those cases where you do not want or need the core file, you should submit the script sub_job_no_core_dump as follows:

    ./sub_job_no_core_dump

where sub_job_no_core_dump is the following:

#!/bin/bash
 
sqsub -t -r 1   -q serial -o ${CLU}_CODE_%J  ./ulimit_script

and where ulimit_script is another script:

#!/bin/bash
 
ulimit -c 0
./a.out

This time the output from ./sub_job_no_core_dump is as follows:

~/ulimit_script: line 4: 31097 Segmentation fault ./a.out srun: error: nar150: task0: Exited with exit code 139


and no core dump file was produced.

Note: All scripts must have the proper permissions, which can be set by issuing the command:

     chmod ugo+rx <script_name>


What is NaN ?

NaN stands for "Not a Number". It is an undefined or unrepresentable value, typically encountered in floating point arithmetic (e.g. the square root of a negative number). To debug this in your program one typically has to unmask or trap floating point exceptions. There are further details in the Common Bugs and Debugging with gdb tutorial.

How can I use double precision for GPU variables (on cluster angel) ?

To use double precision for CUDA variables you need to add the following flag to the compile command:

-arch sm_13

For further information on using CUDA please see this tutorial / online reference.

How do I compile my MPI program with diagnostic information?

HP-MPI

This is the version of MPI used by default on the Opteron clusters running the XC operating system. To get diagnostic information (useful for solving "MPI BUG" errors), compile your code with the additional -ldmpi flag.

My program exited with an error code XXX - what does it mean?

Your application crashed, producing an error code XXX (where XXX is a number). What does it mean? The answer may depend on your application. Normally, user codes do not use the first 130 or so error codes, which are reserved for operating-system-level error codes. On most of our clusters, typing

 perror  XXX

will print a short description of the error. (This is a MySQL utility, and for XXX>122 it will start printing only MySQL-related error messages.) An accurate list of system error codes for the current OS (operating system) can be found on our clusters by printing the contents of the file /usr/include/asm-x86_64/errno.h (/usr/include/asm-generic/errno.h on some systems).

When the error code is returned by the scheduler (when a program submitted to the scheduler with "sqsub" crashes), it has a different meaning. Specifically, if the code is less than or equal to 128, it is a scheduler error (not the application's). Such situations should be reported to SHARCNET staff. Scheduler exit codes between 129 and 255 are user job error codes; subtract 128 to derive the usual OS error code.

On our systems that run Torque/Maui/Moab, exit code 271 means that your program has exceeded one of the resource limits you specified when you submitted your job, typically either the runtime limit or the memory limit. One can correct this by setting a larger runtime limit with the sqsub -r flag (up to the limit allowed by the queue, typically 7 days) or by setting a larger memory limit with the sqsub --mpp flag, depending on the message that was reported in your job output file (exceeding the runtime limit will often only result in a message indicating "killed"). Note that both of these values will be assigned reasonable defaults that depend on the system and may vary from system to system. Another common exit code relating to memory exhaustion is 41 -- this may be reported by a job in the done state and should correspond with an error message in your job output file.
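The subtract-128 rule above can be checked with simple shell arithmetic. A sketch using a hypothetical job that exited with code 139:

```shell
# A scheduler exit code of 139 maps to OS error code 139 - 128 = 11,
# whose description can then be looked up with "perror 11" or in the
# table below.
code=139
echo $((code - 128))    # prints 11
```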

For your convenience, we list OS error codes below:

 1  Operation not permitted
 2  No such file or directory
 3  No such process
 4  Interrupted system call
 5  I/O error
 6  No such device or address
 7  Arg list too long
 8  Exec format error
 9  Bad file number
10  No child processes
11  Try again
12  Out of memory
13  Permission denied
14  Bad address
15  Block device required
16  Device or resource busy
17  File exists
18  Cross-device link
19  No such device
20  Not a directory
21  Is a directory
22  Invalid argument
23  File table overflow
24  Too many open files
25  Not a typewriter
26  Text file busy
27  File too large
28  No space left on device
29  Illegal seek
30  Read-only file system
31  Too many links
32  Broken pipe
33  Math argument out of domain of func
34  Math result not representable
35  Resource deadlock would occur
36  File name too long
37  No record locks available
38  Function not implemented
39  Directory not empty
40  Too many symbolic links encountered
41  (Reserved error code)
42  No message of desired type
43  Identifier removed
44  Channel number out of range
45  Level 2 not synchronized
46  Level 3 halted
47  Level 3 reset
48  Link number out of range
49  Protocol driver not attached
50  No CSI structure available
51  Level 2 halted
52  Invalid exchange
53  Invalid request descriptor
54  Exchange full
55  No anode
56  Invalid request code
57  Invalid slot
58  (Reserved error code)
59  Bad font file format
60  Device not a stream
61  No data available
62  Timer expired
63  Out of streams resources
64  Machine is not on the network
65  Package not installed
66  Object is remote
67  Link has been severed
68  Advertise error
69  Srmount error
70  Communication error on send
71  Protocol error
72  Multihop attempted
73  RFS specific error
74  Not a data message
75  Value too large for defined data type
76  Name not unique on network
77  File descriptor in bad state
78  Remote address changed
79  Can not access a needed shared library
80  Accessing a corrupted shared library
81  .lib section in a.out corrupted
82  Attempting to link in too many shared libraries
83  Cannot exec a shared library directly
84  Illegal byte sequence
85  Interrupted system call should be restarted
86  Streams pipe error
87  Too many users
88  Socket operation on non-socket
89  Destination address required
90  Message too long
91  Protocol wrong type for socket
92  Protocol not available
93  Protocol not supported
94  Socket type not supported
95  Operation not supported on transport endpoint
96  Protocol family not supported
97  Address family not supported by protocol
98  Address already in use
99  Cannot assign requested address
100 Network is down
101 Network is unreachable
102 Network dropped connection because of reset
103 Software caused connection abort
104 Connection reset by peer
105 No buffer space available
106 Transport endpoint is already connected
107 Transport endpoint is not connected
108 Cannot send after transport endpoint shutdown
109 Too many references: cannot splice
110 Connection timed out
111 Connection refused
112 Host is down
113 No route to host
114 Operation already in progress
115 Operation now in progress
116 Stale NFS file handle
117 Structure needs cleaning
118 Not a XENIX named type file
119 No XENIX semaphores available
120 Is a named type file
121 Remote I/O error
122 Quota exceeded
123 No medium found
124 Wrong medium type
125 Operation Cancelled
126 Required key not available
127 Key has expired
128 Key has been revoked
129 Key was rejected by service

Getting Help

I have encountered a problem while using a SHARCNET system and need help, who should I talk to?

If you have access to the Internet, we encourage you to use the problem ticketing system (described in detail below) through the web portal. This is the most efficient way of reporting a problem as it minimizes email traffic and will likely result in you receiving a faster response than through other channels.

You are also welcome to contact system administrators and/or high performance technical computing consultants at any time. You may find their contact information on the directory page.

How long should I expect to wait for support?

Unfortunately SHARCNET does not have adequate funding to provide support 24 hours a day, 7 days a week. User support and system monitoring are limited to regular business hours (9:00-17:00 EST): there is no official support on weekends or holidays.

Please note that this includes monitoring of our systems and operations, so typically when there are problems overnight or on weekends/holidays system notices will not be posted until the next business day.

SHARCNET Problem Ticket System

What is a "problem ticket system"?

This is a system that allows anyone with a SHARCNET account to start a persistent email thread that is referred to as a "problem ticket". The thread is stored indefinitely by SHARCNET and can be consulted by any SHARCNET user in the future. When a user submits a new ticket it will be brought to the attention of an appropriate and available SHARCNET staff member for resolution.

You can find the SHARCNET ticket system here, or by logging into our website and clicking on "Help" then "Problems" in the top left-hand-side menu.

How do I search for existing tickets ?

Type a meaningful string into the search box when logged into the SHARCNET web portal. You can find this text entry box beside the Go button on the top right-hand-side of the page in the web portal.

It is recommended that you use specific terms when searching, for example the exit code returned in your job output, or the error message produced when attempting a command. Common search terms may produce too many results; coupled with the lack of sophisticated ranking, this means your search is likely to be misleading or time-consuming if you have to sift through many results by hand.

What do I need to specify in a ticket ?

If you do not find any tickets that deal with your current problem (as illustrated above), ensure you include the following information, if relevant, when submitting a ticket:

  1. use a concise and unique Subject for the ticket
    • this makes it easier to identify in search results, for example
  2. select sensible values for the System Name and Category drop down boxes
    • this helps guide your ticket to the right staff member as quickly as possible
  3. in the Comment text entry box:
    1. if the problem pertains to a job report the jobid associated with the job
      • this is an integer that is returned by sqsub when you submit the job
      • you can also find a listing of your recent jobs (including their jobid) in the web portal at the bottom of this page
    2. report the exact commands necessary to duplicate the problem, as well as any error output that helps identify the problem
      • if relevant, this should include how the code is compiled, how the job is submitted, and/or anything else you are doing from the command line relating to the problem
    3. if this ticket relates to another ticket, please specify the associated ticket number(s)
    4. if you'd like for a particular staff member to be aware of the ticket, mention them
  4. if you want to expedite resolution you can make your files publicly available ahead of time
    • you should include any relevant files required to duplicate the problem
    • if you're not comfortable with changing your own file permissions, in Comment you can request that a staff member provide a location where you can copy the necessary files, or arrange file transfer via other means. If your code is really sensitive you may have to arrange to meet in person to show them the problem.

How do I submit a ticket?

We recommend that you read the above section on what to specify in a ticket before submitting a new ticket.

Users can submit a problem ticket describing an issue, problem or other request and they will then receive messages concerning the ticket via email (the ticket can also be consulted via the web portal).

You can also open a ticket automatically by emailing help@sharcnet.ca with the email address associated with your SHARCNET account.

How do I give other users access to my files ?

There are two ways to provide other users with access to your files: by changing the file attributes of your directories directly with the chmod command, or by using file access control lists (ACLs). Using ACLs is more flexible, as it allows you to specify individual users and groups and their respective privileges, whereas chmod is more coarse-grained and only allows you to set permissions for your group and global access. At present ACLs are only supported on the SHARCNET global work (/work) and home (/home) filesystems.

providing global access with chmod

The following instructions provide commands that you can use to make your files available to all SHARCNET users, including staff. They assume that you are sharing files in your /home directory, but you can change this to /work, /scratch, etc. with the same effect.

In the commands below, instead of <your-subdirectory>, type the name of the subdirectory containing the files you want to give access to:

(1) Go to your home directory by running:

       cd

(2) Authorize access to the home directory by running:

       chmod o+x  .

(3) Authorize access to <your-subdirectory> by running:

       chmod -R o+rX <your-subdirectory>

Note that if the directory you wish to share is nested multiple directories below your home directory (e.g. you want to give access to ~/dir1/dir2, but not ~/dir1), you will have to run:

       chmod o+x .

in each of the intervening directories that is not already globally accessible.
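You can verify the result with ls -ld. The following self-contained sketch recreates the pattern on throwaway names (demo and demo/sub are illustrative, not required); the "others" permissions are the 8th-10th characters of the mode string:

```shell
# Recreate the sharing pattern on scratch directories and inspect
# the "others" permission bits.
mkdir -p demo/sub && touch demo/sub/data
chmod 700 demo demo/sub && chmod 600 demo/sub/data   # start locked down
chmod o+x demo                  # others may traverse demo
chmod -R o+rX demo/sub          # others may read; X adds x to dirs only
ls -ld demo demo/sub demo/sub/data
```

Here demo ends up with mode d rwx-----x (traversable but not listable), demo/sub with r-x for others, and the file with r-- (X did not make it executable).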

To revoke the public access granted by these changes, simply run the following commands:

       cd
       chmod o-x .

providing per-user/group access with setfacl

An Access Control List (ACL) is a list of users and groups, with their respective access privileges, attached to a file or directory. At present ACLs are only supported on the SHARCNET global work (/work) and home (/home) filesystems.

One can see the ACL for a particular file or directory with the getfacl command, e.g.

[sn_user@hnd50 ~]$ getfacl /work/sn_user
getfacl: Removing leading '/' from absolute path names
# file: work/sn_user
# owner: sn_user
# group: sn_user
user::rwx
group::r-x
other::--x

One uses the setfacl command to modify the ACL for a file or directory. For example, to add read and execute permissions on this directory for user ricky:

[sn_user@hnd50 ~]$ setfacl -m u:ricky:rx /work/sn_user

Now there is an entry for user:ricky with r-x permissions:

[sn_user@hnd50 ~]$ getfacl /work/sn_user
getfacl: Removing leading '/' from absolute path names
# file: work/sn_user
# owner: sn_user
# group: sn_user
user::rwx
user:ricky:r-x
group::r-x
mask::r-x
other::--x

To remove an ACL entry, use the setfacl command with the -x argument, e.g.

[sn_user@hnd50 ~]$ setfacl -x u:ricky /work/sn_user

Now there is no longer an entry for ricky:

[sn_user@hnd50 ~]$ getfacl /work/sn_user
getfacl: Removing leading '/' from absolute path names
# file: work/sn_user
# owner: sn_user
# group: sn_user
user::rwx
group::r-x
mask::r-x
other::--x

Note that if you want to provide access to a nested directory, the permissions need to be changed on all the parent directories as well. Please see the man pages for these commands (man getfacl; man setfacl) for further information. If you'd like help using ACLs please email help@sharcnet.ca.

I am new to parallel programming, where can I find quick references at SHARCNET?

SHARCNET has a number of training modules on parallel programming using MPI, OpenMP, pthreads and other frameworks. Each of these modules has working examples that are designed to be easy to understand while illustrating basic concepts. You may find these along with copies of slides from related presentations and links to external resources on the Main Page of this training/help site.

I am new to parallel programming, can you help me get started with my project?

Absolutely. We will be glad to help you with everything from planning the project, designing your application with appropriate algorithms, and choosing efficient tools for the associated numerical problems, to debugging and analyzing your code. We will do our best to help you speed up your research.

Can you install a package on a cluster for me?

Certainly. We suggest you make the request by sending e-mail to help@sharcnet.ca, or opening a problem ticket with the specific request.

I am in the process of purchasing computer equipment for my research; would you be able to provide technical advice on that?

If you tell us what you want, we may be able to help you out.

Does SHARCNET have a mailing list or user group?

Yes. You may subscribe to one or more mailing lists on the email list page available once you log into the web portal. To find it, please go to MyAccount - Settings - Details in the menu bar on the left and then click on Mail on the "details" page. Don't forget to save your selections.

Does SHARCNET provide any training on programming and using the systems?

Yes. SHARCNET provides workshops on specific topics from time to time and offers courses at some sites. Every June, SHARCNET holds an annual summer school with a variety of in-depth, hands-on workshops. All materials from past workshops/presentations can be found on the SHARCNET web portal.

How do I watch tickets I don't own?

There are two ways. First, to view the tickets of user "USERID", visit a URL of the following form:

https://www.sharcnet.ca/my/problems/view?username=USERID

where USERID is the user you want to see. In the "Actions" column, click on "watch" for problems that you want to follow. You will then receive notifications when any of the problems you are "watching" are updated.

If you want to do the same thing for tickets posted by other members in your group, just access their userpage (listed on https://www.sharcnet.ca/my/users/show/361 )

The other way is to use the search box on the SHARCNET website. By typing the ticket number or userid, you can accomplish much the same thing as described above.

Research at SHARCNET

Where can I find what other people do at SHARCNET?

You may find some of the research activities at SHARCNET by visiting our research initiatives and researcher profile pages.

I have a research project I would like to collaborate on with SHARCNET, who should I talk to?

You may contact SHARCNET head office or contact members of the SHARCNET technical staff.

How can I contribute compute resources to SHARCNET so that other researchers can share it?

Most people's research is "bursty" - there are usually sparse periods of time when some computation is urgently needed, and other periods when there is less demand. One problem with this is that if you purchase the equipment you need to meet your "burst" needs, it'll probably sit, underutilized, during other times.

An alternative is to donate control of this equipment to SHARCNET, and let us arrange for other users to use it when you are not. We prefer to be involved in the selection and configuration of such equipment. Some of SHARCNET's most useful clusters were created this way — Goblin and Wobbie were purchased with user contributions. Our promise to contributors is that, as much as possible, they should obtain as much benefit from the cluster as if it were not shared. Owners get preferential access. Naturally, owners are also able to burst to higher peak usage, since their equipment has been pooled with other contributions. (Technically, SHARCNET cannot itself own such equipment — it remains owned by the institution in question, and will be returned to the contributor upon request.) If you think this model will also work for you and you would like to contribute your computational resources to help the research community at SHARCNET, please contact us to make arrangements.

I do not know much about computation, nor is it my research interest. But I am interested in getting my research done faster with the help of the high performance computing technology. In other words, I do not care about the process and mechanism, but only the final results. Can SHARCNET provide this type of help?

We will be happy to bring the technology of high performance computing to you to accelerate your research, if at all possible. If you would like to discuss your plan with us, please feel free to contact our high performance computing specialists. They will be happy to listen to your needs and are ready to provide appropriate suggestions and assistance.

I am a faculty member from a non-SHARCNET member institution. Can I apply for an account and sponsor my students' accounts?

Whether or not a Canadian faculty researcher is part of a SHARCNET member institution has little bearing on their account. The main difference is that users from institutions external to the consortium are reviewed annually by the Scientific Director rather than by a local SHARCNET site representative. External faculty can sponsor accounts for students in the same fashion as faculty at member institutions; the biggest difference is the lack of access to our fellowship programs.

Fellowships at SHARCNET

I heard SHARCNET offers fellowships, where can I get more information?

You may find additional information regarding fellowships and other dedicated resource opportunities on the Research Fellowships page of the web portal. A dedicated online FAQ is also available.

I would like to do some research at SHARCNET as a visiting scholar, how should I apply?

In general, you will need to find a hosting department or a person affiliated with one of the SHARCNET institutions. You may also contact us directly for more specific information.

I would like to send my students to SHARCNET to do some work for me. How should I proceed?

See above.


Contacting SHARCNET

How do I contact SHARCNET for research, academic exchanges, and technical issues?

Please contact SHARCNET head office.

How do I contact SHARCNET for business development, education and other issues?

Please contact SHARCNET head office.

How to Acknowledge SHARCNET in Publications

How do I acknowledge SHARCNET in my publications?

We recommend one cite the following:

This work was made possible by the facilities of the Shared Hierarchical 
Academic Research Computing Network (SHARCNET:www.sharcnet.ca) and Compute/Calcul Canada.

I've seen different spellings of the name, what is the standard spelling of SHARCNET?

We suggest the spelling SHARCNET, all in upper case.


What types of research programs / support are provided to the research community?

Our overall intent is to provide support that can respond to the range of needs the user community presents, help increase the sophistication of the community, and enable new and larger-in-scope applications making use of SHARCNET's HPC facilities. The range of support can perhaps best be understood in terms of a pyramid:

Level 1

At the apex of the pyramid, SHARCNET supports a small number of projects with dedicated programmer support. The intent is to enable projects that will have a lasting impact and may lead to a "step change" in the way research is done at SHARCNET. Inter-disciplinary and inter-institutional projects are particularly welcomed. Projects can expect to receive support at the level of 2 to 6 months direct support per year for one to two years. Programming time is allocated through a competitive process. See the guidelines.

Level 2

The middle layers of support are provided through a number of initiatives.

These include:

  • Programming support of more modest duration (several days to one month engagement, usually part time)
  • Training on a variety of topics through workshops, seminars and online training materials
  • Consultation. This may include user-initiated interactions on particular programs, algorithms, techniques, debugging, optimization etc., as well as unsolicited help to ensure effective use of SHARCNET systems
  • Site Leaders play an important role in working with the community to help researchers connect with SHARCNET staff and to obtain appropriate help and support.

Level 3

The base level of the pyramid handles the very large number of small requests that are essential to keeping the user community working effectively with the infrastructure on a day-to-day basis. Several of these can be answered by this FAQ; many of the issues are raised through the ticketing system. The support is largely problem-oriented, with each problem being time limited.