Knowledge Base / Expanded FAQ

This page is a comprehensive collection of essential information needed to use SHARCNET, gathered conveniently on a single page of our Help Wiki. If you are a new SHARCNET user, this page most likely contains all you need to get going on SHARCNET. However, there is much more information in this Help Wiki. Please use the search box to find pages that may be relevant to you. You can also go to the Main Page of this wiki for a general table of contents. Finally, you can also look at the list of all articles in this Help Wiki or a list of all categories.




About SHARCNET

What is SHARCNET?

SHARCNET stands for Shared Hierarchical Academic Research Computing Network. Established in 2000, SHARCNET is the largest high performance computing consortium in Canada, involving 18 universities and colleges across southern, central and northern Ontario.

SHARCNET is a member consortium in the Compute/Calcul Canada national HPC platform.

Where is SHARCNET?

The main office of SHARCNET is located in the Western Science Centre at The University of Western Ontario. The SHARCNET high performance clusters are installed at a number of the member institutions in the consortium and operated by SHARCNET staff across different sites.

What does SHARCNET have?

The infrastructure of SHARCNET consists of a group of 64-bit high performance Opteron and Xeon clusters with close to 20,000 CPUs along with a group of storage units deployed at a number of universities and colleges. These high performance clusters are interconnected through the Ontario Research Innovation Optical Network (ORION) with a private, dedicated connection running at 10 Gigabits per second on major links (some links have 1 Gigabit per second connections). SHARCNET clusters run the Linux operating system.

What can I do with SHARCNET?

If you have a program that takes months to run on your PC, you could probably run it within a few hours using hundreds of processors on the SHARCNET clusters, provided your program is inherently parallelisable. If you have hundreds or thousands of test cases to run through on your PC or computers in your lab, then running those cases independently across hundreds of processors will significantly reduce your test cycles.

If you have used Beowulf clusters made of commodity PCs, you may notice a performance improvement on SHARCNET clusters, which have high-speed Quadrics, Myrinet and Infiniband interconnects, as well as on SHARCNET machines which have large amounts of memory. Also, SHARCNET clusters themselves are connected through a dedicated, private connection over the Ontario Research Innovation Optical Network (ORION).

If you have access to supercomputing facilities elsewhere and you wish to share your ideas with us and SHARCNET users, please contact us. Together we can make SHARCNET better.

Who is running SHARCNET?

The daily operation and development of SHARCNET computational facilities is managed by a group of highly qualified system administrators. In addition, we have a team of high performance technical computing consultants, who are responsible for technical support on libraries, programming and application analysis.

How do I contact SHARCNET?

For technical inquiries, you may send E-mail to help@sharcnet.ca, or contact your local system administrator or HPC specialist. For general inquiries, you may contact the SHARCNET main office.


Getting an Account with SHARCNET and Related Issues

What is required to obtain a SHARCNET account?

Anyone who would like to use SHARCNET may apply for an account. Please bear in mind the following:

  • There are no shared/group accounts, each person who uses SHARCNET requires their own account and must not share their password
  • Applicants who are not faculty (eg. students, postdocs) require an account sponsor who must already have a SHARCNET account. This is typically one's supervisor.
  • There is no fee for academic access, but account sponsors are responsible for reporting their research activities to Compute Canada, and all academic SHARCNET users must obtain a Compute Canada account before they may apply for a SHARCNET account.
  • All SHARCNET users must read and follow the policies listed here

How do I apply for an account?

Applying for an account is either done through the Compute Canada Database (for academic users) or by contacting SHARCNET (for non-academic use). Detailed step-by-step instructions are provided on the Getting_an_Account_with_SHARCNET page.

How do I update / renew my account?

It is no longer necessary to report to SHARCNET. SHARCNET accounts (for academic users) are automatically activated or deactivated based on the status of your primary role with Compute Canada (your primary CCRI), so as long as you complete the Compute Canada account renewal and reporting process, your SHARCNET account will remain in good standing.

Compute Canada account holders may renew their account at any time, even after it has expired, by visiting the CCDB and filling out their renewal form. Note that it may take 3-4 business days for your renewal to be confirmed by an account authority at Compute Canada. Note that if you are a sponsored user, you must email us at help@sharcnet.ca to have your SHARCNET account reactivated following expiry.

If anything is unclear or if you have any questions about the Compute Canada account renewal / reporting process please email accounts@computecanada.ca. If you have any questions about account renewals that directly pertain to SHARCNET please email help@sharcnet.ca.

I am changing supervisor or I am becoming faculty, and I already have a SHARCNET account. Should I apply for a new account?

No, you should apply for a new role (CCRI) and indicate that you want your new role to be your primary role. The process is described in detail on the Getting_an_Account_with_SHARCNET page.

I have an existing SHARCNET account and need to link it to a new Compute Canada account, how do I do that?

You first need to get a Compute Canada Role Identifier (CCRI) and then notify SHARCNET that you would like to link your Compute Canada Account (CCI) to your existing SHARCNET account. Detailed step-by-step instructions are provided on the Getting_an_Account_with_SHARCNET page.

What is a role / CCRI ?

A role (CCRI: Compute Canada Role Identifier) is a way to identify you as a person at a point in time. It includes information about your position, institution and department, as well as any other roles that sponsor your role or that your role sponsors.

Each person may have one or more roles that are associated with each of their current and past positions. These various roles ultimately link back to one's CCI (Compute Canada Identifier).

If the roles are created through Compute Canada they are referred to as a CCRI (Compute Canada Role Identifier), although other roles pre-dating Compute Canada also exist.

In practice you only need to be concerned with your role (and the appropriate role from your sponsor) when applying for accounts, running jobs in particular projects associated with a particular group/sponsor, or when viewing the SHARCNET web portal with multiple roles (you may see different information in the web portal depending on which role you have selected to be active).

For further information about roles please see the SHARCNET-specific role information here and the more general Compute Canada specific information here.

Can I have multiple roles ?

Yes, though the behavior may differ depending on which systems (consortia) you are using. For SHARCNET we assume that your primary role indicates the group for which you'd like to associate your account with by default. You can select which of your active roles are primary when you apply for a new role, or from your Compute Canada account profile. At SHARCNET, your unix group membership will map to the group associated with your primary role, and by default jobs will be accrued to your primary role's group.

That being said:

  • You may still change group file ownership to your other role's group using the chgrp command, or switch your active group with newgrp (see the sketch after this list)
  • You can select which group for which you'd like a job to be accrued to with the sqsub -p flag, for example, if you have more than one sponsor and your non-primary sponsor has the username smith, you can attribute your usage to the smith project/accounting group like this:
 sqsub -o foo.log -r 5h -p smith ./foo
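
For the first bullet, a minimal sketch (the group name smith and the file name results.dat are hypothetical):

 chgrp smith results.dat    # change the group ownership of an existing file
 newgrp smith               # start a subshell whose default group is smith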

In terms of usage policy, you can use storage that is available to either group, but your personal storage will be limited as for any other user (you don't get a double quota for /work/$USER or /home/$USER). You can accrue CPU usage to either group, and your job will be impacted by the fair share status of the group to which its usage is attributed.

Can I just have a cluster account without having a web portal account?

No. The web portal account is an online interface to your account in our user database. It provides a way of managing your information and keeping track of problems you may encounter.

Can I E-mail or call to open an account?

No, please follow the instructions above.

OK, I've seen and heard the word "web portal" enough, what is it anyway?

A web portal is a web site that offers online services. Usually a web portal has a database at the backend, in which people can store and access personal information, but may involve other software services like this wiki. At SHARCNET, registered users can login to the web portal, manage their profiles, submit and review programming and performance related problems, look-up solutions to problems, contribute to our wiki, and assess their SHARCNET usage, amongst other things.

My supervisor forgot all about his/her username/CCRI, so my application can't go through, what should I do?

Please have them send an E-mail to help@sharcnet.ca and we will re-inform them of their login credentials.

My supervisor does not use SHARCNET, why is my supervisor asked to have an account anyway?

Your supervisor's account ID is used to identify which group your account belongs to. We account for all usage and set defaults at the group level.

Is there any charge for using SHARCNET?

SHARCNET is free for all academic research. If you are working outside of academia we recommend you read our Commercial Access Policy which can be found in the SHARCNET web portal here.

I forgot my password

You can reset your password here, or by clicking the "Forget password" link after trying to sign-in.

I forgot my username

If you forget your username, please send an E-mail to help@sharcnet.ca. Your username for the web portal and cluster account are the same.

My account has been disabled (so I cannot login). What should I do?

At present all academic SHARCNET accounts are automatically enabled/disabled based on the status of your corresponding Compute Canada roles. If your SHARCNET account is disabled it was most likely due to your Compute Canada account becoming expired as a result of not completing the Compute Canada account renewal / reporting process. You should have been sent an email from Compute Canada indicating why your account was deactivated.

To renew your account (you may do this even after your account is expired), log into the CCDB and complete the reporting process. Once your renewal is approved your SHARCNET account will be automatically reactivated (note that once requested, renewal may take up to 3-4 business days as a local account administrator must verify your reporting information). NOTE: if your sponsor's Compute Canada account was expired and deactivated, you must request that we reactivate your SHARCNET account manually after you've renewed your Compute Canada account - email help@sharcnet.ca.

If you have questions concerning your account please email help@sharcnet.ca.

How do I change the email address associated with my account?

If you wish to use a new email address you have to update your Contact Information at the Compute Canada Database. Contact information is now updated at SHARCNET automatically based on what you have indicated to Compute Canada.

I've changed institutions and want to update the affiliation of my primary role

To change your primary role to your new institution, go to the Compute Canada My Account Add Role page and complete the information to apply for a new role. Be sure to tick both Make this role primary and Disable old roles.

I no longer want my SHARCNET account

If you would like to cease using SHARCNET (including access to all systems and list email) email help@sharcnet.ca. Please let us know if you'd like to disable your corresponding Compute Canada role (resulting in all its associated Compute Canada consortia accounts being disabled as well) or if you'd just like to disable your SHARCNET account independent of your other consortia accounts.

You should only request this if you want your account disabled *now* - if you do not complete the annual renewal process at Compute Canada your account will eventually be deactivated automatically.

The Acceptable Use Policy, in particular pt. 36, outlines our policy in the event that an account is disabled.

You may have your account re-enabled by emailing help@sharcnet.ca.


Logging in to Systems, Transferring and Editing Files

How do I login to SHARCNET?

There is no single point of entry at present. "Logging in to SHARCNET" means you login to one of the SHARCNET systems. A complete list of SHARCNET systems can be found on our facilities page.

Unix/Linux/OS X

To login to a system, you need to use a Secure Shell (SSH) connection. If you are logging in from a UNIX-based machine, make sure it has an SSH client (ssh) installed (this is almost always the case on UNIX/Linux/OS X). If you have the same login name on both your local system and SHARCNET, and you want to login to, say, saw, you may use the command:

ssh saw.sharcnet.ca

If your SHARCNET username is different from the username on your local systems, then you may use either of the following forms:

ssh saw.sharcnet.ca -l username
ssh username@saw.sharcnet.ca

If you want to establish an X window connection so that you can use graphics applications such as gvim and xemacs, you can add a -Y to the command:

ssh -Y username@saw.sharcnet.ca

This will automatically set the X DISPLAY variable when you login.

Windows

If you are logging in from a computer running Windows and need some pointers we recommend consulting our SSH tutorial.

What is the difference between Login Nodes and Development Nodes?

Login Nodes

Most of our clusters have distinct login nodes associated with them that you are automatically redirected to when you login to the cluster (some systems are directly logged into, eg. SMPs and smaller specialty systems). You can use these to do most of your work preparing for jobs (compiling, editing configuration files) and other low-intensity tasks like moving and copying files.

You can also use them for other quick tasks, like simple post-processing, but any significant work should be submitted as a job to the compute nodes. On most login nodes, each process is limited to 1 cpu-hour; this will be noticeable if you perform anything compute-intensive, and can affect IO-oriented activity as well (such as very large scp or rsync operations).

Here is an example of logging in and being redirected to a saw login node, in this case saw-login1:

localhost:~ sn_user$ ssh saw.sharcnet.ca
Last login: Fri Oct 14 22:38:40 2011 from localhost.your_institution.ca

Welcome to the SHARCNET cluster Saw.
Please see the following URL for status of this and other clusters:
https://www.sharcnet.ca/my/systems


[sn_user@saw-login1 ~]$ hostname
saw-login1

Development Nodes

On some systems there are also development nodes which can be used to do slightly more resource intensive, interactive work. For the most part these are identical to cluster login nodes, however they are not visible outside of their respective cluster (one can only reach them after logging into a login node) and they have more modest resource limits in place, allowing quick interactive testing outside of the job queuing system. Please see the help wiki pages for the respective clusters, Orca, Saw and Kraken, for further details on how one can use these nodes.

How can I suspend and resume my session?

The program screen can start persistent terminals from which you can detach and reattach. The simplest use of screen is

screen -dR

which will either reattach you to any existing session or create a new one if one doesn't exist. To terminate the current screen session, type exit. To detach manually (you are automatically detached if the connection is lost) press ctrl+a followed by d; you can then resume later as above (ideal for running background jobs). Note that ctrl+a is screen's escape sequence, so you have to do ctrl+a followed by a to get the regular effect of pressing ctrl+a inside a screen session (e.g., moving the cursor to the start of the line in a shell).

For a list of other ctrl+a key sequences, press ctrl+a followed by ?. For further details and command line options, see the screen manual (or type man screen on any of the clusters).

Other notes:

  • If you want to create additional "text windows", use Ctrl-A Ctrl-C. Remember to type "exit" to close it.
  • To switch to a "text window" with a certain number, use Ctrl-A # (where # is 0 to 9).
  • To see a list of window numbers use Ctrl-A w
  • To be presented a list of windows and select one to use, use Ctrl-A " (This is handy if you've made too many windows.)
  • If the program running in a screen "text window" refuses to die (i.e., it needs to be killed) you can use Ctrl-A K
  • For brief help on keystrokes use Ctrl-A ?
  • For extensive help, run "man screen".

What operating systems are supported?

UNIX in general. Currently, Linux is the only operating system used within SHARCNET.

What makes a cluster different than my UNIX workstation?

If you are familiar with UNIX, then using a cluster is not much different from using a workstation. When you login to a cluster, you in fact only log in to one of the cluster nodes. In most cases, each cluster node is a physical machine, usually a server class machine, with one or several CPUs, that is more or less the same as a workstation you are familiar with. The difference is that these nodes are interconnected with special interconnect devices and the way you run your program is slightly different. Across SHARCNET clusters, you are not expected to run your program interactively. You will have to run your program through a queueing system. That also means where and when your program gets to run is not decided by you, but by the queueing system.

Which cluster should I use?

Each of our clusters is designed for a particular type of job. Our cluster map shows which systems are suitable for various job types.

What programming languages are supported?

The primary programming languages C, C++ and Fortran are fully supported. Other languages, such as Java, Pascal and Ada, are also supported, but with limited technical support from us. If your program is written in any language other than C, C++ and Fortran, and you encounter a problem, we may or may not be able to solve it within a short period of time. Note: this does not mean you can't use other languages like Matlab, R, Python, Perl, etc. We normally think of those as "scripting" languages, but that doesn't imply that good HPC necessarily requires an explicitly-compiled language like Fortran.

How do I organize my files?

To best meet a range of storage needs, SHARCNET provides a number of distinct storage pools that are implemented using a variety of file systems, servers, RAID levels and backup policies. These different storage locations are summarized as follows:

 place     quota**  expiry    access                      purpose                       backed-up?
 /home     10 GB    none      unified                     sources, small config files   Yes
 /work     1 TB     none      unified*                    active data files             No
 /scratch  none     2 months  per-cluster                 temporary files, checkpoints  No
 /tmp      none     2 days    per-node                    node-local scratch            No
 /freezer  2 TB     2 years   unified (login nodes only)  long term data archive        No
  • The quota column indicates if the file system has a per-user limit to the amount of data they can store.
  • The expiry column indicates if the file system automatically deletes old files and the timescale for deletion.
  • The access column indicates the scope, or availability of the file system. "unified" means that when you login, regardless of cluster, you will always see the same directory.

* May be less and not unified on some of our clusters (eg. requin and some of the specialty systems); type "quota" when you log into a cluster for up-to-date information.
** There is also a quota on the maximum number of files a user can have on any file system. Currently the limit is 1,000,000.

For more detailed information please go to the Using Storage article.

Where is my /work folder?

/work is an automounted filesystem. When you first login to a system your directory may not appear in the /work directory. As soon as you access it (cd to it, or ls it or its contents), the system will make your directory visible and it will appear in the /work directory. If you are connecting with a GUI client you need to go to the full path of your work directory, /work/YOUR_USER_NAME.
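
For example, from a command line you can trigger the automount before browsing with a GUI client:

cd /work/$USER        # or: ls -ld /work/$USER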

Best storage to use for jobs

Since /home is remote on most clusters and is used frequently by all users, it's important that it not be used significantly for jobs (eg. reading in a small configuration file from /home is ok - writing repeatedly to many different files in /home during the course of your jobs is not).

One can do significant I/O to /work from jobs, but it is also remote to most clusters. For this reason, to obtain the best file system throughput you should use the /scratch file system. In some cases jobs may be able to make use of /tmp for local caching, but it is not recommended as a storage target for regular output.
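
As a minimal sketch of this workflow (the directory and file names are hypothetical), a job's data can be staged to /scratch, the job run from there, and only the results copied back:

mkdir -p /scratch/$USER/myrun
cp /work/$USER/input.dat /scratch/$USER/myrun/    # stage input to the fast scratch filesystem
cd /scratch/$USER/myrun                           # submit the job from this directory
# ...after the job completes, keep only what you need on /work
cp output.dat /work/$USER/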

For users who want to learn more about optimizing I/O at SHARCNET please read Analyzing I/O Performance.

Cluster-local Scratch storage

/scratch has no quota limit - so you can put as much data in /scratch/<userid> as you want, until there is no more space. The important thing to note though, is that all files on /scratch that are over 62 days old will be automatically deleted (please see this knowledge base entry for details on how /scratch is purged of old files).

Backups

Backups are in place for your home directory ONLY. Scratch and global work are not backed up. In general we store one version of each file for the previous 5 working days, one for each of the 4 previous weeks, and one version per month before that. Backups began in September 2006.

Node-Local Storage

/tmp may be unavailable for use on clusters where there are no local disks on the compute nodes. Users should try to use /scratch instead, or email help@sharcnet.ca to discuss using node-local storage.

Archival Storage

To back up large volumes of data that don't need to stay available on global work or local scratch, use the /freezer filesystem.

Please note: unlike our old /archive file system, the new /freezer file system has both a size quota (2TB; going over the quota results in your submitted jobs not running - same as with /work), and an expiry: after 2 years your files will be deleted. See our storage policies page for details.

How are file permissions handled at SHARCNET?

By default, anyone in your group can read and access your files. You can provide access to any other users by following this Knowledge Base entry.

All SHARCNET users are associated with a primary GID (group id) belonging to the PI of the group (you can see this by running id username, substituting your own username). This allows groups to share files without any further action, as the default file permissions for all SHARCNET storage locations (Eg. /gwork/user ) allow read (list) and execute (enter / access) permissions for the group, eg. they appear as:

  [sn_user@req770 ~]$ ls -ld /gwork/sn_user
  drwxr-x---  5 sn_user sn_group 4096 Jan 25 22:01 /gwork/sn_user

Further, by default the umask value for all users is 0002, so any new files or directories will continue to provide access to the group.

Should you wish to keep your files private from all other users, you should set the permissions on the base directory to only be accessible to yourself. For example, if you don't want anyone to see files in your home directory, you'd run:

chmod 700 ~/

If you want to ensure that any new files or directories are created with different permissions, you can set your umask value. See the man page for further details by running:

man umask
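
For example, to make new files and directories accessible only to you for the remainder of your session, you could run:

umask 077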

For further information on UNIX-based file permissions please run:

man chmod

What about really large files or if I get the error 'No space left on device' in /gwork or /scratch?

If you need to work with really large files we have tips on optimizing performance with our parallel filesystems here.

How do I transfer files/directories to/from or between clusters?

Unix/Linux

To transfer files to and from a cluster on a UNIX machine, you may use scp or sftp. For example, if you want to upload file foo.f to cluster orca from your machine myhost, use the following command

myhost$ scp foo.f orca.sharcnet.ca:

assuming that your machine has scp installed. If you want to transfer a file from Windows or Mac, you need to have scp or sftp for Windows or Mac installed.

To transfer the file foo.f between SHARCNET clusters, say from your home directory on orca to your scratch directory on requin, simply use the following command

[username@orc-login2:~]$ scp foo.f requin:/scratch/username/

If you are transferring files between a UNIX machine and a cluster, you may use scp command with -r option. For instance, if you want to download the subdirectory foo in the directory project in your home directory on saw to your local UNIX machine, on your local machine, use command

myhost$ scp -rp saw.sharcnet.ca:project/foo .

Similarly, you can transfer the subdirectory between SHARCNET clusters. The following command

[username@orc-login2:~]$ scp -rp requin:/scratch/username/foo .

will download subdirectory foo from your scratch directory on requin to your home directory on orca (note that the prompt indicates you are currently logged on to orca).

The use of -p option above will preserve the time stamp of each file. For Windows and Mac, you need to check the documentation of scp for features.

You may also tar and compress the entire directory and then use scp to save bandwidth. In the above example, first you login to orca, then do the following

[username@orc-login2:~]$ cd project
[username@orc-login2:~]$ tar -cvf foo.tar foo
[username@orc-login2:~]$ gzip foo.tar

Then on your local machine myhost, use scp to copy the tar file

myhost$ scp orca.sharcnet.ca:project/foo.tar.gz .

Note that for most Linux distributions, tar has a -z option that will compress the .tar file using gzip.
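
For example, the tar and gzip steps above can be combined into a single command:

[username@orc-login2:~]$ tar -czvf foo.tar.gz foo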

Windows

Please see the instructions for using an SSH client in our SSH tutorial.

How can I best transfer large quantities of data to/from SHARCNET and what transfer rate should I expect?

In general, most users should be fine using scp or rsync to transfer data to and from SHARCNET systems. If you need to transfer a lot of files, rsync is recommended so that you do not need to restart the transfer from scratch should there be a connection failure. Although you can use scp and rsync to any cluster's login node(s), it is often best to use dtn.sharcnet.ca - it is dedicated to data transfer.
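
For example (the paths shown are placeholders), a recursive transfer to the data transfer node might look like:

myhost$ rsync -avP ~/project/results/ username@dtn.sharcnet.ca:/work/username/results/

Re-running the same command after a dropped connection will only transfer the files that are still missing or incomplete.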

In general one should expect the following transfer rates with scp:

  • If you are connecting to SHARCNET through a Research/Education network site (ORION, CANARIE, Internet2) and are on a fast local network (this is the case for most users connecting from academic institutions) then you should be able to attain sustained transfer speeds in excess of 10MB/s. If your path is all gigabit or better, you should be able to reach rates above 50 MB/s.
  • If you are transferring data over the wider internet, you will not be able to attain these speeds, as all traffic that does not enter/exit SHARCNET via the R&E net is restricted to a limited-bandwidth commercial feed. In this case one will typically see rates on the order of 1MB/s or less.

Keep in mind that filesystems and networks are shared resources and suffer from contention; if they are busy the above rates may not be attainable.

If you need to transfer a large quantity of data to SHARCNET and are finding your transfer rate to be slow please contact help@sharcnet.ca to request assistance. We can provide additional tips and tools to greatly improve data transfer rates, especially to systems/users outside of Ontario's regional ORION network. For example, we've observed speed-ups from <1 MB/s using scp to well over 10 MB/s between Compute Canada systems connected via CANARIE by using specialized data-transfer programs (eg. bbcp).

How do I access the same file from different subdirectories on the same cluster ?

You should not need to copy large files on the same cluster (e.g. from one user to another or using the same file in different subdirectories). Instead of using scp you might consider issuing a "soft link" command. Assume that you need access to the file large_file1 in subdirectory /work/user1/subdir1 and you need it to be in your subdirectory /work/my_account/my_dir from where you will invoke it under the name my_large_file1. Then go to that directory and type:

ln -s /work/user1/subdir1/large_file1    my_large_file1

Another example: assume that in subdirectory /work/my_account/PROJ1 you have several subdirectories called CASE1, CASE2, ... In each subdirectory CASEn you have a slightly different code, but all of them process the same data file called test_data. Rather than copying the test_data file into each CASEn subdirectory, place test_data one level up, i.e. in /work/my_account/PROJ1, and then in each CASEn subdirectory issue the following "soft link" command:

ln -s ../test_data  test_data

The "soft links" can be removed by using the rm command. For example, to remove the soft link from /work/my_account/PROJ1/CASE2 type following command from this subdirectory:

rm -rf test_data

Typing the above command from the subdirectory /work/my_account/PROJ1 would remove the actual file, and then none of the CASEn subdirectories would have access to it.

How are files deleted from the /scratch filesystems?

All files on /scratch that are over 2 months old (not old in the common sense, please see below) are automatically deleted. You will be sent an email notification beforehand warning you of any filesystems (not the actual files, however) where you may have files scheduled for deletion in the immediate future.

An unconventional aspect of this system is that it does not determine the age of a file based on the file's attributes, e.g., the dates reported by the stat, find, ls, etc. commands. The age of a file is determined based on whether or not its data contents (i.e., the information stored in the file) have changed, and this age is stored externally to the file. Once a file is created in /scratch/<userid> , reading it, renaming, changing the file's timestamps with the touch command, or copying it into another file are all irrelevant in terms of changing its age with respect to the purging system. The file will be expired 2 months after it was created. Only files where the contents have changed will have their age counter "reset".

Unfortunately, there currently exists no method to obtain a listing of the files that are scheduled for deletion. This is something that is being addressed, however there is no estimated time for implementation.

If you have data in /scratch that needs to persist (eg. configuration files, important simulation output) we recommend you stage it to /gwork or /freezer as appropriate.

How to archive my data?

Presently SHARCNET provides the /freezer filesystem as a regularly accessible filesystem on the login nodes of our clusters (not the compute nodes!). To back up data which you'd like to keep, but don't expect to access in the foreseeable future, or to just keep a backup of data from the global work or local scratch filesystems, one may use regular commands (cp, mv, rm, rsync, tar etc.), eg.

 cp /scratch/$USER/$SIMULATION /freezer/$USER/$SIMULATION

Be extremely careful when deleting your data from the Archive: there is no backup for the data!

Please note: unlike our old /archive file system, the new /freezer file system has both a size quota (2TB; going over the quota results in your submitted jobs not running - same as with /work), and an expiry: after 2 years your files will be deleted. See our storage policies page for details.

How can I check the hidden files in a directory?

The "." at the beginning of the name means that the file is "hidden". You have to use the -a option with ls to see it. I.e. 'ls -a'.

If you want to display only the hidden files then type:

ls -d .*

Note: there is an alias which is loaded from /etc/bashrc (see your .bashrc file). The alias is defined by alias l.='ls -d .* --color=tty' and if you type:

l.

you will also display only the hidden files.

How can I count the number of files in a directory?

One can use the following command to count the number of files in a directory (in this example, your /work directory):

find /work/$USER -type f   | wc -l

It is always a good idea to archive and/or compress files that are no longer needed on the filesystem (see below). This helps minimize one's footprint on the filesystem and, as such, the impact one has on other users of the shared resource.

How to organize a large number of files?

With parallel cluster filesystems, you will get the best I/O performance writing data to a small number of large files. Since all metadata operations on each of our parallel filesystems are handled by a single file server, the server can become overwhelmed when many files are being accessed, leading to poor overall I/O performance for all users. If your workflow involves storing data in a large number of files, it is best to pack these files into a small number of larger archives, e.g. using the tar command

tar cvf archiveFile.tar directoryToArchive

For better performance with many files inside your archive, we recommend using DAR (Disk ARchive utility), which is a disk analog of tar (Tape ARchive). Dar can extract files from anywhere in the archive much faster than tar. The dar command is available by default on SHARCNET systems. It can be used to pack files into a dar archive by doing something like:

dar -s 1G -w -c archiveFile -g directoryToArchive

In this example we split the archive into 1GB chunks, and the archive files will be named archiveFile.1.dar, archiveFile.2.dar, and so on. To list the contents of the archive, you can type:

dar -l archiveFile

To temporarily extract files for post-processing into current directory, you would type:

dar -R . -O -x archiveFile -v -g pathToYourFile/fileToExtract

I am unable to connect to one of the clusters; when I try, I am told the connection was closed by the remote host

The most likely cause of this behaviour is repeated failed login attempts. Part of our security policies involves blocking the IP address of machines that attempt multiple logins with incorrect passwords over a short period of time---many brute-force attacks on systems do exactly this: looking for poor passwords, badly configured accounts, etc. Unfortunately, it isn't uncommon for a user to forget their password and make repeated login attempts with incorrect passwords and end up with that machine blacklisted and unable to connect at all.

A temporary solution is simply to attempt to login from another machine. If you have access to another machine at your site, you can shell to that machine first, and then shell to the SHARCNET system (as that machine's IP shouldn't be blacklisted). In order to have your machine unblocked, you will have to file a problem ticket as a system administrator must manually intervene in order to fix it.

NOTE: there are other situations that can produce this message, however they are rarer and more transient. If you are unable to log in from one machine, but can from another, it is most likely the IP blacklisting that is the problem and the above will provide a temporary work-around while your problem ticket is processed.

I can login successfully using WinSCP but I can't find my /work directory and files

Windows tools like WinSCP want to list the /work directory to let users click to traverse. This doesn't work because of automounting (and also makes no sense given that there would be >4000 entries). To a command-line user this is a non-issue, since anything which accesses /work/$USER will instantiate the mount (ls -ld /work/$USER/ for instance, or cd).

A recommended workaround is to create a link to /work/$USER at /home/$USER/work and use it as the default directory in your WinSCP settings.
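
For example, from a shell on the cluster you could create the link once (a sketch; the link name work is arbitrary):

ln -s /work/$USER ~/work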

I am unable to ssh/scp from SHARCNET to my local computer

Most campus networks are behind some sort of firewall. If you can ssh out to SHARCNET, but cannot establish a connection in the other direction, then you are probably behind a firewall and should speak with your local system administrator or campus IT department to determine if there are any exceptions or workarounds in place.

SSH tells me SOMEONE IS DOING SOMETHING NASTY!?

Suppose you attempt to login to SHARCNET, but instead get an alarming message like this:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
fe:65:ab:89:9a:23:34:5a:50:1e:05:d6:bf:ec:da:67.
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending key in /home/user/.ssh/known_hosts:42
RSA host key for requin has changed and you have requested strict checking.
Host key verification failed. 

SSH begins a connection by verifying that the host you're connecting to is authentic. It does this by caching the host's "hostkey" in your ~/.ssh/known_hosts file. At times, a hostkey may be changed legitimately; when this happens, you may see such a message. It's a good idea to verify this with us; you may be able to check the fingerprint yourself by logging into another SHARCNET system and running:

ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub 

If the fingerprint is OK, the normal way to fix the problem is to simply remove the old hostkey from your known_hosts file. You can use your choice of editor if you're comfortable doing so (it's a plain text file, but has long lines). On a unix-compatible machine, you can also use the following very small script (Substitute the line(s) printed in the warning message illustrated above for '42' here.):

perl -pi -e 'undef $_ if (++$line == 42)' ~/.ssh/known_hosts
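
Alternatively, on most systems with OpenSSH you can remove the stale entry by host name rather than by line number, for example:

ssh-keygen -R requin.sharcnet.ca    # use the host name exactly as it appears in the warning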

Another solution is brute-force: remove the whole known_hosts file. This throws away any authentication checking, and your first subsequent connection to any machine will prompt you to accept a newly discovered host key. If you find this prompt annoying and you aren't concerned about security, you can avoid it by adding a text file named ~/.ssh/config on your machine with the following content:

StrictHostKeyChecking no

Ssh works, but scp doesn't!

If you can ssh to a cluster successfully, but cannot scp to it, the problem is likely that your login scripts print unexpected messages which confuse scp. scp is based on the same ssh protocol, but assumes that the connection is "clean": that is, that it does not produce any un-asked-for content. If you have something like:

echo "Hello, Master; I await your command..."

scp will be confused by the salutation. To avoid this, simply ensure that the message is only printed on an interactive login:

if [ -t 0 ]; then
    echo "Hello, Master; I await your command..."
fi

or in csh/tcsh syntax:

if ( -t 0 ) then
    echo "Hello, Master; I await your command..."
endif

How do I edit my program on a cluster?

We provide a variety of editors, such as the traditional text-mode emacs and vi (vim), as well as a simpler one called nano. If you have X on your desktop (and tunneled through SSH), you can use the GUI versions (xemacs, gvim).

If your desktop supports FUSE, it's very convenient to simply mount your home tree like this:

mkdir sharcnet
sshfs orca.sharcnet.ca: sharcnet

you can then use any local editor of your choice.
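
When you are finished, you can unmount the directory again (on Linux; on OS X use umount instead):

fusermount -u sharcnet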

If you run emacs on your desktop, you can also edit a remote file from within your local emacs client using Tramp, opening and saving a file as /username@cluster.sharcnet.ca:path/file.


Compiling and Running Programs

How do I compile my programs?

To make it easier to compile across all SHARCNET clusters, we provide a generic set of commands:

cc, c++, f77, f90, f95

and for MPI,

mpicc, mpic++, mpiCC, mpif77, mpif90, mpif95

On most of our clusters (specifically those running Centos 6), what these commands actually invoke is controlled by the modules loaded. The default is the Intel compiler and the corresponding Openmpi library compiled for it. These are very reasonable choices for most programs.

SHARCNET compile wrapper

NOTE - this wrapper is only still available on older Centos 5 systems and will be phased out as we upgrade to Centos 6.

The wrapper commands are all aliased to a common SHARCNET compile script. You can see how compile works and what options are possible by running:

less `which compile` 

on any SHARCNET Centos 5 system.

To see what the compile script actually executes run it with the -show flag, eg.

[snuser@nar316 test]$ cc -intel -llapack -show lapack.c
icc lapack.c -L/opt/sharcnet/intel/11.0.083/icc/mkl/lib/ -lmkl -lguide -lpthread

Here are some basic examples:

cc foo.c -o foo
cc -openmp foo.c -llapack -o foo
f90 *.f90 -lmpi -o my_mpi_prog
mpif90 *.f90 -o my_mpi_prog
f90 -mpi -c a.f90; mpif90 -c b.f90; compile a.o b.o -lmpi -o my_mpi_prog

In the first example, the preferred compiler and optimization flags will be selected, but not much else happens. In the second case, the underlying compiler's OpenMP flag (which differs among compilers) is selected, as well as linking with a system-tuned LAPACK/BLAS library. In the third example, an MPI program written in Fortran 90 is compiled and linked with whatever cluster-specific MPI libraries are required. The fourth example is identical except that the mpi-prefixed command is used. In the fifth example, two files are compiled separately, then linked with the MPI libraries; the point is simply that even when not linking, you need to declare that you're using MPI by either an mpi-prefixed command or -mpi or -lmpi.

These commands will invoke the underlying compilers, such as the Intel or PathScale compilers, whichever are available on the system you are using. For specific compiler options, please refer to the man pages.

You aren't required to use these commands, and may not want to if you have pre-existing Makefiles, for instance. You can always add -v to see what full commands are being generated.

What compilers are available?

For a full listing of all SHARCNET compilers see the Compiler section in the web portal software pages.

The "default" SHARCNET compiler is the Intel compiler. It is installed on all of our systems and its module is loaded by default. To identify which compiler is the default, execute the command "module list".

Generic compiler commands (c++,cc,CC,cxx,f77,f90,f95) are actually aliases which invoke the underlying compiler. To see which compiler is actually called, you would execute:

[ppomorsk@orc-login1:~] c++ --version
c++ (ICC) 12.1.3 20120212
Copyright (C) 1985-2012 Intel Corporation.  All rights reserved.

To see which compiler executable the alias points to, you would do:

[ppomorsk@orc-login1:~] which c++
/opt/sharcnet/intel/12.1.3/snbin/c++
[ppomorsk@orc-login1:~] ls -l /opt/sharcnet/intel/12.1.3/snbin/c++
lrwxrwxrwx 1 root root 39 Sep 11 11:37 /opt/sharcnet/intel/12.1.3/snbin/c++ -> /opt/sharcnet/intel/12.1.3/icc/bin/icpc
[ppomorsk@orc-login1:~]

The corresponding MPI compiler commands (mpic++,mpicc,mpiCC,mpicxx,mpif77,mpif90,mpif95) are also available. What these are set to depends on the openmpi module loaded. When compiling MPI code, it is important that the openmpi module and compiler module match.

If you want to try another compiler, you should load the relevant module, after unloading the intel module. For example, to use open64 you would first unload the default Intel compiler module and then load the module for the desired version of open64.

module unload intel
module load open64/4.5.2

In general, you should choose to use the highest performance compilers. In the past GNU compilers have generally offered inferior performance in comparison to commercial compilers, but recently they have improved. If one plans to do extensive computations, it is advisable to compile code with different compilers and compare performance, then select the best compiler.

What standard (eg. math) libraries are available?

For a full listing of all SHARCNET software libraries see the Library section in the web portal software pages.

If you need to use BLAS or LAPACK routines you should consider using the ACML library and PathScale compilers on Opteron systems, and MKL and the Intel compilers on Intel hardware. ACML and MKL are vendor-optimized libraries that include BLAS and LAPACK routines. Refer to the ACML and MKL software pages for examples on their use.

Relocation overflow and/or truncated to fit errors

If you get "relocation overflow" and/or "relocation truncated to fit" errors when you compile big fortran 77 codes using pathf90 and/or ifort, then you should try the following:

(A) If the static data structures in your fortran 77 program are greater than 2GB you should try specifying the option -mcmodel=medium in your pathf90 or ifort command.

(B) Try running the code on a different system which has more memory:

   Other clusters that you can try are: requin or hound 

You would probably benefit from looking at the listing of all of the clusters:

https://www.sharcnet.ca/my/systems

and this page has a table showing how busy each one is:

https://www.sharcnet.ca/my/perf_cluster/cur_perf

How do I run a program?

In general, users are expected to run their jobs in "batch mode". That is, one submits a job -- the application problem -- to a queue through a batch queue command, the scheduler schedules the job to run at a later time and sends the results back once the program is finished.

In particular, one will use the sqsub command (see What is the batch job scheduling environment SQ? below) to launch a serial job foo

sqsub -o foo.log -r 5h ./foo

This means to submit the command foo as a job with a 5 hour runtime limit and put its standard output into a file foo.log (note that it is important not to put too tight a runtime limit on your job, as it may sometimes run slower than expected due to interference from other jobs).

If your program takes command line arguments, place the arguments after your program name just as when you run the program interactively

sqsub -o foo.log -r 5h ./foo arg1 arg2...

For example, suppose your program takes the command line options -i input and -o output for input and output files respectively; they will be treated as arguments to your program, not options of sqsub, as long as they appear after your program name in your sqsub command

sqsub -o foo.log -r 5h ./foo -i input.dat -o output.dat

If you have more than one sponsor and your non-primary sponsor has the username smith, you can attribute your usage to the smith project/accounting group like this:

sqsub -o foo.log -r 5h -p smith ./foo

To launch a parallel job foo_p

sqsub -q mpi -n num_cpus -o foo_p.log -r 5h ./foo_p

The basic queues on SHARCNET are:

 queue     usage
 serial    for serial jobs
 mpi       for parallel jobs using the MPI library
 threaded  for threaded jobs using OpenMP or POSIX threads
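
For example, a threaded (OpenMP) job using 4 cores could be submitted as follows (a sketch; foo_omp is a hypothetical executable):

sqsub -q threaded -n 4 -o foo_omp.log -r 5h ./foo_omp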

To see the status of submitted jobs, use command sqjobs.

How do I run a program interactively?

Several of the clusters now provide a collection of development nodes that can be used for this purpose. An interactive session can also be started by submitting a screen -D -m bash command as a job. If your job is a serial job, the submission line should be

sqsub -q serial -r <RUNTIME> -o /dev/null screen -D -fn -m bash

Once the job begins running, figure out what compute node it has launched on

sqjobs

and then ssh to this node and attach to the running screen session

ssh -t <NODE> screen -r

You can access screen's options via the ctrl+a keystroke. Some examples are ctrl+a ? to bring up help and ctrl+a a to send a ctrl+a. See the screen man page (man screen) for more information. The message Suddenly the Dungeon collapses!! - You die... is screen's way of telling you it is being killed by the scheduler (most likely because the time you specified for the job has elapsed). The exit command will terminate the session.

If your job is an MPI job, the submission line should be

sqsub -q mpi --nompirun -n <NODES> -r <RUNTIME> -o /dev/null screen -D -fn -m bash

Once the job starts, screen will be launched on the rank zero node. This may not be the lowest number node allocated, so you have to run

qstat -f -l <JOBID> | egrep exec_host

to find out what node it is (the first one listed). You can then proceed as in the non-MPI case. The command pbsdsh -o <COMMAND> can be used to run commands on all the allocated nodes (see man pbsdsh), and the command mpirun <COMMAND> can be used to start MPI programs on the nodes.

What about running a program compiled on one cluster on another?

In general, if your program starts executing on a system other than the one it was compiled on, then there are likely no issues. However, you may want to compare results of test jobs just to make sure. The specific things to watch out for are

  1. using a particular compiler and/or optimizations,
  2. using a particular library (most frequently a specific MPI implementation), and
  3. using the /home filesystem because it is global.

In general, as long as very specific architecture optimizations are not being used (e.g., -march=native), you should be able to compile a program on one SHARCNET system and run it on others as most systems are binary compatible and the compiler runtime libraries are installed everywhere. In particular, this is true for our larger core systems and should be true for our other specialized systems as well (the big exception is executables compiled to use the GPU - these will only run on GPU clusters like monk or angel). It is worth noting that some compilers produce faster code on particular processors, and some compiler optimizations may not work on all systems, so you may want to recompile in order to get the best performance. We actually have different default compilers on different systems (Intel on most clusters, Pathscale on requin). It is probably worth doing some comparisons on your own code because our tests show no clear winners.

With regard to MPI, and other libraries, you have to be a little more careful. Most of the core systems have most of the same libraries and use OpenMPI by default though the default version will vary between clusters. Programs which work on one system should be able to run on another system without any modification as long as the OpenMPI version matches (at the end of the day as long as the runtime libraries and the necessary dependencies are installed you shouldn't have any problems). If the default version of OpenMPI on a system is not the one needed, a different version can be selected via "module switch" command.

For example, if you compiled your program on kraken using the default modules there (openmpi/intel/1.4.2 and the required intel/11.0.083 compiler module), you should be able to run the same executable on orca as long as you switch to those modules, which are not loaded on orca by default. The switch is accomplished with:

module switch intel/11.0.083
module switch openmpi/intel/1.4.2

To make sure the right modules are loaded, execute:

module list

Note: Some older systems (eg. requin) use HP-MPI, and a program linked to HP-MPI libraries will not work on OpenMPI systems and will have to be recompiled.

Another thing to watch out for is using /home, because it is global. Because /home is global, it is slow and is not intended to be used as a working directory for running jobs. If your program writes to the local /work and /scratch filesystems on the compute clusters, and you submit the job from /work or /scratch (so that the stdout gets written there), then running the executable from /home should be fine. However, if it is run from and/or writes to /home, then it will suffer a severe performance penalty. It's probably easiest to set up your working directory in /work and then just symlink to your binary in /home.
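
A minimal sketch of that setup (the directory and program names are hypothetical):

mkdir -p /work/$USER/myproject
cd /work/$USER/myproject
ln -s ~/bin/my_program .              # symlink to the binary kept in /home
sqsub -o run.log -r 5h ./my_program   # job output is written under /work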

My application runs on Windows, can I run it on SHARCNET?

It depends. If your application is written in a high level language such as C, C++ or Fortran and is system independent (meaning it does not depend on any third party libraries that are available only for Windows), then you should be able to recompile and run your application on SHARCNET systems. However, if your application depends entirely on software available only for Windows, then you are out of luck. In general it is impossible to convert code at the binary level between Windows and any UNIX platform.

My application runs on Windows HPC clusters, can I run it on SHARCNET clusters?

If your application does not use any Windows-specific APIs then it should be possible to recompile and run it on SHARCNET UNIX/Linux based clusters.

My program needs to run for more than seven (7) days but user certification limits cap me at seven days of run-time; what can I do?

Although there is a higher level of user certification than the default (User1), this only affects how many processors you can consume simultaneously; the seven day run-time limit cannot be exceeded through higher levels of certification as all SHARCNET queues are globally capped at seven (7) days of run-time. This is done primarily to encourage the practice of checkpointing, but it also prevents users from monopolizing large amounts of resources outside of dedicated allocations with long running jobs, ensures that jobs free up nodes often enough for the scheduler to start large jobs in a modest amount of time, and allows us to drain all systems for maintenance within a reasonable time-frame.

In order to run a program that requires more than this amount of wall-clock time, you will have to make use of a checkpoint/restart mechanism so that the program can periodically save its state and be resubmitted to the queues, picking up from where it left off. It is crucial to store checkpoints so that one can avoid lengthy delays in obtaining results in the event of a failure. Investing time in testing and ensuring that one's checkpoint/resume works properly is inconvenient but ensures that valuable time and electricity are not wasted unduly in the long run. Redoing a long calculation is expensive.

Handling long jobs with chained job submission

Once you have ensured that your job can automatically resume from a checkpoint, the best way to conduct long simulations is to submit a chain of jobs, such that each subsequent job depends on the jobid before it. This will minimize the time your subsequent jobs will wait to run.

This can be done with the sqsub -w flag, eg.

    -w|--waitfor=jobid[,jobid...]]
                   wait for a list of jobs to complete

For example, consider the following instance where we want job #2 to start after job #1. We first submit job #1:

[snuser@bul131 ~]$ sqsub -r 10m -o chain.test hostname
WARNING: no memory requirement defined; assuming 1GB
submitted as jobid 5648719

Now when we submit job #2 we specify the jobid from the first job:

[snuser@bul131 ~]$ sqsub -r 10m -w 5648719 -o chain.test hostname
WARNING: no memory requirement defined; assuming 1GB
submitted as jobid 5648720

Now you can see that two jobs are queued, and one is in state "*Q" - meaning that it has conditions:

[snuser@bul131 ~]$ sqjobs
  jobid  queue state ncpus nodes time command
------- ------ ----- ----- ----- ---- -------
5648719 serial     Q     1     -  15s hostname
5648720 serial    *Q     1     -   2s hostname
2232 CPUs total, 1607 busy; 1559 jobs running; 1 suspended, 6762 queued.
403 nodes allocated; 154 drain/offline, 558 total.

Looking at the second job in detail we see that it will not start until the first job has completed with an "afterok" status:

[snuser@bul131 ~]$ qstat -f 5648720 | grep -i depend
    depend = afterok:5648719.krasched@krasched 
    -N hostname -l pvmem=1024m -m n -W depend=afterok:5648719 -l walltime=

In this fashion it is possible to string many jobs together. The second job (5648720) should continue to accrue priority in the queue while the first job is running, so once the first job completes the second job should start much more quickly than if it were submitted after the first job completed.

How long a command can I enter when using sqsub?

You may want to submit a very long command line for sqsub like

[hahn@hnd19 ~]$ sqsub -r1 --mpp 200m -o out.hnd echo 12345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890  1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890 
WARNING: jobname too long: truncated to 200 characters. 
submitted as jobid 6875982

This is, as it says, merely a warning; the job is submitted anyway (and in this case worked fine). There is nothing particularly dangerous about the jobname being truncated - it is an unfortunate limit of the underlying scheduler, and the only loss is that we cannot fully re-create the job command as it is stored in our central DB. The way sqsub is implemented means that the actual command (rather than the jobname) is handled without any practical limit (typically 255 characters for file/directory name components, 4k for a path, and at least 128k for commands).

How can I know when my job will start?

You can see a "best case" start time for your job with the showstart command, eg.

[merz@saw-login1 ~]$ showstart 374512 
job 374512 requires 8 procs for 1:00:00:00 
Earliest start in 8:49:18 on Mon Oct 31 18:26:57 
Earliest completion in 1:08:49:18 on Tue Nov 1 18:26:57 
Best Partition: DEFAULT

Keep in mind this is the "earliest" start, though - other situations may arise (eg. someone submitting higher priority jobs) that will further delay the start of your job.

How do I checkpoint/restart my program?

Checkpointing is a valuable strategy that minimizes the loss of compute time should a long running job be unexpectedly killed by a power outage, a node failure, or hitting its runtime limit, including needing more than seven (7) days (the maximum for all SHARCNET queues). Assuming the code is serial or multi-threaded (*not* MPI), you can use the Berkeley Lab Checkpoint/Restart software, BLCR. Documentation and usage instructions can be found on SHARCNET's BLCR software page. Note that BLCR requires your program to use shared libraries (i.e. not be statically compiled).

If your program is MPI based (or any other type of program requiring a specialized job starter to get it running), it will have to be coded specifically to save state and restart from that state on its own. Please check the documentation that accompanies any software you are using to see what support it has for checkpointing. If the code has been written from scratch, you will need to build checkpointing functionality into it yourself---output all relevant parameters and state such that the program can be subsequently restarted, reading in those saved values and picking up where it left off.

How do I run a program remotely?

It is also possible to specify a command to run at the end of an ssh command. A command like ssh narwhal.sharcnet.ca sqjobs, however, will not work because ssh does not set up a full environment by default. In order to get the same environment you get when you log in, it is necessary to run the command under bash in login mode.

myhost$ ssh narwhal.sharcnet.ca bash -l -c sqjobs

If you wish to specify a command longer than a single word, it is necessary to quote it, as bash -c only takes a single argument. In order to pass these quotes through to ssh, it is necessary to escape them; otherwise the local shell will interpret them and strip them off. An example is

myhost$ ssh narwhal.sharcnet.ca bash -l -c \' sqsub -r 5h ./myjob \'

Most problems with these commands are related to the local shell interpreting things that you wish to pass through to the remote side (e.g., stripping out any unescaped quotes). Use -v with ssh and set -x with bash to see what command(s) ssh and bash are executing respectively.

myhost$ ssh -v narwhal.sharcnet.ca bash -l -c \' sqsub -r 5h ./myjob \'
myhost$ ssh narwhal.sharcnet.ca bash -l -c \' set -x\; sqsub -r 5h ./myjob \'

Is package X preinstalled on system Y, and, if so, how do I run it?

The list of packages that SHARCNET has preinstalled on the various clusters, along with instructions on how to use them, can be found on the SHARCNET software page.

On the software page a package is sometimes listed as available by default and sometimes as a module. What is the difference?

We have implemented the Modules system for all supported software packages on our clusters - each version of each software package that we have installed can be dynamically loaded or unloaded in your user environment with the module command.

See Configuring your software environment with Modules for further information, including examples.
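
For instance, a typical session might look like the following (a minimal sketch; the module names and versions available vary from cluster to cluster, and intel/11.1.069 is just the version used elsewhere on this page):

module avail                  # list all software modules available on this cluster
module list                   # show modules currently loaded in your environment
module unload intel           # remove the currently loaded intel module
module load intel/11.1.069    # load a specific version instead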

What is the batch job scheduling environment SQ?

SQ is a unified frontend for submitting jobs on SHARCNET, intended to hide unnecessary differences in how the clusters are configured. On clusters which are based on RMS, LSF+RMS, or Torque+Maui, SQ is just a thin shell of scripting over the native commands. On Wobbie, the native queuing system is called SQ.

To submit a job, you use sqsub:

sqsub -n 16 -q mpi -r 5h ./foo

This submits foo as an MPI command on 16 processors with a 5 hour runtime limit (make sure to be somewhat conservative with the runtime limit as a job may run for longer than expected due to interference from other jobs). You can control input, output and error output using these flags:

sqsub -o outfile -i infile -e errfile -r 5h ./foo

This will run foo with its input coming from a file named infile, its standard output going to a file named outfile, and its error output going to a file named errfile. Note that using these flags is preferred over shell redirection, since the flags permit your program to do IO directly to the file, rather than having the IO transported over sockets and then to a file.

For threaded applications (which use Pthreads, OpenMP, or fork-based parallelism), do this:

sqsub -q threaded -n 2 -r 5h -o outfile ./foo

For serial jobs

sqsub -r 5h -o outfile ./foo

How do I check running jobs and control jobs under SQ?

To show your jobs, use "sqjobs". By default, it will show you only your own jobs. With "-a" or "-u all", it will show all users. Similarly, "-u someuser" will show jobs only for that particular user.

the "state" listed for a job is one of the following:

  • Q - queued
  • R - running
  • Z - suspended (sleeping)
  • D - done (shown briefly on some systems)
  •  ? - unknown (something is wrong, such as a node crashing)

Times shown are the amount of time since being submitted (for queued jobs) or since starting (for all others).

To kill, suspend or resume your jobs, use sqkill/suspend/resume with the job ID as shown by sqjobs.
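
For example, to remove the second chained job submitted in the earlier example (the jobid shown is just the one from that example):

sqkill 5648720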


How do I translate my LSF command to SQ?

SQ very strongly resembles LSF commands such as bsub. For instance, here are two versions, the first assuming LSF, the second using SQ:

bsub -q mpi -n 16 -o term.out ./ParTISUN
sqsub -q mpi -n 16 -o term.out ./ParTISUN

There are some differences:

  • SQ doesn't have static queues like LSF. Instead, the "-q" simply describes the kind of job - MPI (parallel), threaded or serial. "test" is considered a modifier of the job type.
  • sqjobs is similar to bjobs.
  • sqkill/suspend/resume are similar to bkill/suspend/resume.

How can I submit jobs that will run wherever there are free cpus?

We are working on a new mechanism to provide this capability.

Command 'top' gives me two different memory size (virt, res). What is the difference between 'virtual' and 'real' memory?

'virt' refers to the total virtual address space of the process, including virtual space that has been allocated but never actually instantiated, memory which was instantiated but has been swapped out, and memory which may be shared. 'res' is memory which is actually resident - that is, instantiated with real RAM pages. Resident memory is normally the more meaningful value, since it may be judged relative to the memory available on the node (recognizing, of course, that the memory on a node must be divided among the resident pages of all processes, so an individual thread should always strive to keep its working set a little smaller than the node's total memory divided by the number of processors).

There are two cases where the virtual address space size is significant. The first is when the process is thrashing - that is, has a working set size bigger than available memory. Such a process will spend a lot of time in 'D' state, since it is waiting for pages to be swapped in or out. A node on which this is happening will have a substantial paging rate shown in the 'si' column of output from vmstat (the 'so' column is normally less significant, since si/so do not necessarily balance).
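
To check for this on a node, you can watch the paging columns of the standard vmstat utility, e.g.:

vmstat 5    # report memory and paging statistics every 5 seconds; a sustained non-zero 'si' column indicates thrashing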

The second condition where virtual size matters is that the kernel does not implement RLIMIT_RSS, but does enforce RLIMIT_AS (virtual size). We intend to enforce a sanity-check RLIMIT_AS, and in some cases already do. The goal is to avoid a node becoming unusable or crashing when a job uses too much memory. Current settings are very conservative, though - 150% of physical memory.

In this particular case, the huge virtual size relative to the resident size is almost certainly due to the way Silky implements MPI using shared memory. Such memory is counted as part of every process involved, but obviously does not mean that N * 26.2 GB of RAM is in use.

In this case, the real memory footprint of the MPI rank is 1.2 GB - if you ran the same code on another cluster which didn't have numalink shared memory, both resident and virtual sizes would be about that much. Since most of our clusters have at least 2 GB per core, this code could run comfortably on other clusters.

Can I use a script to compile and run programs?

Yes. For instance, suppose you have a number of source files main.f, sub1.f, sub2.f, ..., subN.f. To compile these source files and generate an executable myprog, you would likely type the following command

f77 main.f sub1.f sub2.f ... subN.f -llapack -o myprog 

Here, the -o option specifies the executable name myprog rather than the default a.out, and the option -llapack at the end tells the compiler to link your program against the LAPACK library (needed if LAPACK routines are called in your program). If you have a long list of files, typing the above command every time can be really annoying. You can instead put the command in a file, say, mycomp, then make mycomp executable by typing the following command

chmod +x mycomp

Then you can just type

./mycomp

at the command line to compile your program.
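
For reference, mycomp would simply contain the compile command itself - a minimal sketch (adjust the file list and libraries for your own program):

#!/bin/bash
# compile all sources and link against LAPACK, producing the executable myprog
f77 main.f sub1.f sub2.f subN.f -llapack -o myprog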

This is a simple way to minimize typing, but it may wind up recompiling code which has not changed. A widely used improvement, especially for larger or more numerous source files, is to use make. make recompiles only those source files which have changed since the last compilation, minimizing the time spent waiting for the compiler. On the other hand, compilers will often produce faster code if they are given all the sources at once (as above).

I get errors trying to redirect input into my program when submitted to the queues, but it runs fine if run interactively

The standard method to attach a file as the input to a program when submitting to SHARCNET queues is to use the -i flag to sqsub, e.g.:

sqsub -q serial -i inputfile.txt ...

Occasionally you will encounter a situation where this approach appears not to work, and your program fails to run successfully (the reasons for which can be very subtle). Here is an example of one such message generated by a FORTRAN program:

lib-4001 : UNRECOVERABLE library error 
    A READ operation tried to read past the end-of-file.

Encountered during a list-directed READ from unit 5 
Fortran unit 5 is connected to a sequential formatted text file 
    (standard input). 
/opt/sharcnet/sharcnet-lsf/bin/sn_job_starter.sh: line 75: 25730 Aborted (core dumped) "$@"

yet if run on the command line, using standard shell redirection, it works fine, e.g.:

program < inputfile.txt

Rather than struggle with this issue, there is an easy workaround: instead of submitting the program directly, submit a script that takes the name of the input file as an argument and launches your program using shell redirection. This circumvents whatever issue the scheduler is having, since it no longer has to redirect the input via the submission command. The following shell script will do this (you can copy this directly into a text file and save it to disk; the name of the file is arbitrary but we'll assume it to be exe_wrapper.sh).

Bash Shell script: exe_wrapper.sh
#!/bin/bash
 
EXENAME=replace_with_name_of_real_executable_program
 
if (( $# != 1 )); then
        echo "ERROR: incorrect invocation of script"
        echo "usage: ./exe_wrapper.sh <input_file>"
        exit 1
fi
 
./${EXENAME} < ${1}

Note that you must edit the EXENAME variable to reference the name of the actual executable; the script can also be easily modified to take or pass additional arguments to the program being executed, as desired. Ensure the script is executable by running chmod +x exe_wrapper.sh. You can now submit the job by submitting the *script*, with a single argument being the file to be used as input, i.e:

sqsub -q serial -r 5h -o outputfile.log ./exe_wrapper.sh inputfile.txt

This will result in the job being run on a compute node as if you had typed:

./program < inputfile.txt

NOTE: this workaround, as provided, will only work for serial programs, but can be modified to work with MPI jobs by further leveraging the --nompirun option to the scheduler, and launching the parallel job within the script using mpirun directly. This is explained below.

How do I submit an MPI job such that it doesn't automatically execute mpirun?

This can be done by using the --nompirun flag when submitting your job with sqsub. By default, MPI jobs submitted via sqsub -q mpi are expected to be MPI programs, and the system automatically launches your program with mpirun. While this is convenient in most cases, some users may want to implement pre or post processing for their jobs, in which case they may want to encapsulate their MPI job in a shell script.

Using --nompirun means that you have to take responsibility for providing the correct MPI launch mechanism, which depends on the scheduler as well as the MPI library in use. You can actually see what the system default is by running sqsub -vd ....

system   MPI launch prefix
older    /opt/hpmpi/bin/mpirun -srun
most     /opt/sharcnet/openmpi/VERSION/COMPILER/bin/mpirun

NOTE: VERSION is the version number, COMPILER is the compiler used to compile the library, eg. /opt/sharcnet/openmpi/1.6.2/intel/bin/mpirun

Some of our older systems (eg. Requin running XC, LSF-slurm and HP-MPI) use the first form. Our newer-generation systems (which include Saw, Orca, Hound, Goblin, Angel, Brown and others) are based on Centos, Torque/Maui/Moab and OpenMPI.

The basic idea is that you would write a shell script (eg. named mpi_job_wrapper.x) to perform some actions surrounding your actual MPI job (using requin as an example here):

#!/bin/bash
echo "hello this could be any pre-processing commands"
/opt/hpmpi/bin/mpirun -srun ./mpi_job.x
echo "hello this could be any post-processing commands"

You would then make this script executable with:

chmod +x mpi_job_wrapper.x

and submit this to run on 4 cpus for 7 days with job output sent to wrapper_job.out:

sqsub -r 7d -q mpi -n 4 --nompirun -o wrapper_job.out ./mpi_job_wrapper.x

Now you should see the following output in ./wrapper_job.out:

hello this could be any pre-processing commands
<any output from the MPI job>
hello this could be any post-processing commands

On newer clusters (e.g., orca), due to the wide spread of memory and cores across sockets/dies, getting good performance requires binding your processes to cores so they don't wander away from the local resources they start using. The mpirun flags --bind-to-core and --cpus-per-proc are for this. If sqsub -vd ... shows these flags, make sure to duplicate them in your own scripts. If it does not show them, do not use them: they require special scheduler support, and without it, your processes will wind up bound to cores that other jobs are using.

There are also a number of reasons NOT to use your own scripts: with --nompirun, your job will have allocated a number of cpus, but the non-MPI portions of your script will run serially. This wastes cycles on all but one of the processors - a serious concern for long serial sections and/or jobs with many cpus. "sqsub --waitfor" provides a potentially more efficient mechanism for chaining jobs together, since it permits a hypothetical serial post-processing step to allocate only a single CPU.

But this also brings up another use-case: your --nompirun script might also consist of multiple MPI sub-jobs. For instance, you may have chosen to break up your workflow into two separate MPI programs, and want to run them successively. You can do this with such a script, including possible adjustments, perhaps to output files, between the two MPI programs. Some of our users have done iterative MPI jobs this way, where an MPI program is run, its outputs massaged or adjusted, and the MPI program run again. Strictly speaking, you can do whatever you want with the resources you allocate as part of a job - multiple MPI sub-jobs, serial sections, etc.
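
As a minimal sketch of such a multi-stage wrapper (assuming an OpenMPI-based cluster like orca; the program and data file names here are hypothetical), submitted with sqsub --nompirun as above:

#!/bin/bash
# stage 1: run the first MPI program
/opt/sharcnet/openmpi/1.6.2/intel/bin/mpirun -np 4 --machinefile $PBS_NODEFILE ./mpi_stage1.x
# adjust or massage the intermediate output between stages (hypothetical file names)
mv stage1_output.dat stage2_input.dat
# stage 2: the second MPI program picks up where the first left off
/opt/sharcnet/openmpi/1.6.2/intel/bin/mpirun -np 4 --machinefile $PBS_NODEFILE ./mpi_stage2.x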

Some jobs need to know the allocated node names and the number of cpus on each node, for instance in order to construct their own hostfile. This information is available through the '$LSB_MCPU_HOSTS' environment variable. You may insert the lines below into your bash script

echo $LSB_MCPU_HOSTS 
arr=($LSB_MCPU_HOSTS)
echo "Hostname= ${arr[0]}"
echo "# of cpus= ${arr[1]}"

Then, you may see

bru2 4
Hostname= bru2
# of cpus= 4

in your output file. Utilizing this, you can construct your own hostfile whenever you submit your job.

The following example shows a job wrapper script (eg. ./mpi_job_wrapper.x ) that translates an LSF job layout to an OpenMPI hostfile, and launches the job on the nodes in a round robin fashion:

 #!/bin/bash
 echo 'hosts:' $LSB_MCPU_HOSTS
 arr=($LSB_MCPU_HOSTS)
 if [ -e ./hostfile.$$ ]
 then
                 rm -f ./hostfile.$$
 fi
 for (( i = 0 ; i < ${#arr[@]}-1 ; i=i+2 ))
 do
                 echo ${arr[$i]} slots=${arr[$i+1]} >> ./hostfile.$$
 done
 /opt/sharcnet/openmpi/current/intel/bin/mpirun -np 2 -hostfile ./hostfile.$$ -bynode ./a.out

Note that one would still have to set the desired number of processes in the final line (in this case it is only set to 2). This could serve as a framework for developing more complicated job wrapper scripts for OpenMPI on the XC systems.

If you are having issues with using --nompirun we recommend that you submit a problem ticket so that staff can help you figure out how it should be utilized on the particular system you are using.

How do I submit a large number of jobs with a script?

There are two methods: you can pack a large number of runs into a single submitted job, or you can use a script to submit a large number of jobs to the scheduler.

With the first method, you would write a shell script (let us call it start.sh) similar to the one found above. On requin with the older HP-MPI it would be something like this:

#!/bin/csh
/opt/hpmpi/bin/mpirun -srun ./mpiRun1 inputFile1
/opt/hpmpi/bin/mpirun -srun ./mpiRun2 inputFile2
/opt/hpmpi/bin/mpirun -srun ./mpiRun3 inputFile3
echo Job finishes at `date`.
exit

On orca with OpenMPI the script would be (note that the number of processors should match whatever you specify with sqsub):

#!/bin/bash
/opt/sharcnet/openmpi/1.6.2/intel/bin/mpirun -np 4 --machinefile $PBS_NODEFILE ./mpiRun1
/opt/sharcnet/openmpi/1.6.2/intel/bin/mpirun -np 4 --machinefile $PBS_NODEFILE ./mpiRun2
/opt/sharcnet/openmpi/1.6.2/intel/bin/mpirun -np 4 --machinefile $PBS_NODEFILE ./mpiRun3

Then you can submit it with:

sqsub -r 7d -q mpi -n 4 --nompirun -o outputFile ./start.sh

Your MPI runs (mpiRun1, mpiRun2, mpiRun3) will run one at a time, each using all available processors within the job's allocation, i.e. whatever you specified with the -n option to sqsub. Please be aware of the total execution time for all runs: with a large number of runs it can easily exceed the maximum allowed 7 days, in which case the remaining runs will never start.

With the second method, your script would contain sqsub inside it. This approach is described in Serial / parallel farming (or throughput computing).
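
As a minimal sketch of the second method (the program and input file names are hypothetical), a bash loop run on the login node can submit one serial job per input file:

#!/bin/bash
# submit one serial job per input file; each job writes to its own log
for i in $(seq 1 100); do
    sqsub -q serial -r 2h -o run${i}.log ./myprog input${i}.dat
done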

How do I submit a job to run on a specific node?

Sometimes there is a need to submit a job to a specific node or nodes, e.g. if a cluster has a variety of interconnects and you want to run your job on a specific interconnect which is wired to specific nodes. In that case you would use a command such as

sqsub -q mpi -r 5m -n 16 -N 2 --nodes=saw[18-19] -o output.log ./yourExecutable

where the range of nodes is specified with --nodes, and the total number of cores (-n) should be consistent with the number of nodes (-N); for example, on saw there are 8 cores per node, so 2 nodes gives 16 cores. If you want to submit your job to one specific node, then the command would be

sqsub -q threaded -r 5m -n 8 -N 1 --nodes=saw18 -o output.log ./yourExecutable

Note that the nodes you want might not be available for a while, so the job is likely to wait longer in the queue.

I have a program that runs on my workstation, how can I have it run in parallel?

If the program was written without parallelism in mind, then there is very little that you can do to run it automatically in parallel. Some compilers are able to translate some serial portions of a program, such as loops, into equivalent parallel code, which allows you to exploit the parallelism available within symmetric multiprocessing (SMP) systems. Also, some libraries are able to use parallelism internally, without any change to the user's program. For this to work, your program needs to spend most of its time in the library, of course - the parallel library does not speed up the rest of your program. Examples of this include threaded linear algebra and FFT libraries.

However, to gain true parallelism and scalability, you will need to either rewrite the code using the message passing interface (MPI) library or annotate your program with OpenMP directives. We will be happy to help you parallelize your code if you wish. (Note that OpenMP is inherently limited by the size of a single node or SMP machine - most SHARCNET resources are distributed-memory clusters, so scaling beyond a single node requires MPI.)

Also, the preceding answer pertains only to the idea of running a single program faster using parallelism. Often, you might want to run many different configurations of your program, differing only in a set of input parameters. This is common when doing Monte Carlo simulation, for instance. It is usually best to start out doing this as a series of independent serial jobs. It is possible to implement this kind of loosely-coupled parallelism using MPI, but it is often less efficient and more difficult.

How can I have a quick test run of my program?

Debugging and development often require the ability to quickly test your program repeatedly. At SHARCNET we facilitate this work by providing a pre-emptive testing queue on some of our clusters, and a set of interactive development nodes on the larger clusters.

The test queue is highly recommended for most test cases as it is convenient and prepares one for eventually working in the production environment. Unfortunately the test queue is only available on Requin, Goblin and Kraken. Development nodes allow users to work interactively with their program outside of the job scheduling system and production environment, but we only set aside a limited number of them on the larger clusters. The rest of this section will only address the test queue, for more information on development nodes see the Kraken, Orca or Saw cluster pages.

The test queue allows one to quickly test a program in the job environment to ensure that the job will start properly, and can be useful for debugging. It also has the benefit that it will allow you to debug any size of job. Do not abuse the test queue: doing so will have an impact on your fairshare job scheduling priority, and test jobs have to interrupt other users' production jobs temporarily, slowing down other users of the system.

Note that the flag for submitting to the test queue is provided in addition to the regular queue selection flag. If you are submitting an MPI job to the test queue, both -q mpi and -t should be provided. If you omit the -q flag, you may get odd errors about libraries not being found; without knowing the type of job, the system simply does not know how to start your program correctly.

To perform a test run, use sqsub option --test or -t. For example, if you have an MPI program mytest that uses 8 processors, you may use the following command

sqsub --test -q mpi -n 8 -o mytest.log ./mytest

The only difference here is the addition of the "--test" flag (note -q appears as would be normal for the job). The scheduler will normally start such test jobs within a few seconds.

The main purpose of the test queue is to quickly verify the startup of a changed job - to check that a real, production run won't hit a bug shortly after starting due to, for instance, missing parameters.

The "test queue" only allows a job to run for a short period of time (currently 1 hour), therefore you must make sure that your test run will not take longer than this to finish. Only one test job may be run at a time. In addition, the system monitors the user submissions and decreases the priority of submitted jobs over time within an internally defined time window. Hence if you keep submitting jobs as test runs, the waiting time before those jobs get started will be getting longer, or you will not be able to submit test jobs any more. Test jobs are treated as "costing" four times as much as normal jobs.

Which system should I choose?

There are many clusters, many of them specialized in some way. We provide an interactive map of SHARCNET systems on the web portal which visually presents a variety of criteria as a decision making aid. In brief however, depending on the nature of your jobs, there may be a clear preference for which cluster is most appropriate:

is your job serial?
Kraken is probably the right choice, since it has a very large number of processors, and consequently has high throughput. Your job will probably run soonest if you submit it here.
do you use a lot of memory?
Orca or Hound is probably the right choice.
does your MPI program utilize a lot of communication?
Orca, Saw, Requin, Hound, Angel, Monk and Brown have the fastest networks, but it's worth trying Kraken if you aren't familiar with the specific differences between Quadrics, Myrinet and Infiniband.
does your job (or set of jobs) do a lot of disk IO?
you probably want to stick to one of the major clusters (Orca/Requin/Saw) which have bigger and much faster (parallel) filesystems.

Where can I find available resources?

Information about available computational resources is available to the public on the SHARCNET web site: see our systems page and our cluster performance page.

Changes in the status of each system, such as down time, power outages, etc., are announced through the following three channels:

  • Web links under systems. You need to check the web site from time to time in order to catch such public announcements.
  • System notice mailing list. This is the passive way of being informed: you receive the notices by e-mail as soon as they are announced, though some people may find the extra mail annoying. Also, such notices may be buried in dozens or hundreds of other e-mail messages in your mail box, and hence are easily missed.
  • SHARCNET RSS broadcasting. A good analogy of RSS is like traffic information on the radio. When you are on a road trip and you want to know what the traffic conditions are ahead, you turn on the car radio, tune-in to a traffic news station and listen to updates periodically. Similarly, if you want to know the status of SHARCNET systems or the latest SHARCNET news, events and workshops, you can turn to RSS feeds on your desktop computer.

The following SHARCNET RSS feeds are available:

The term RSS may stand for Really Simple Syndication, RDF Site Summary, or Rich Site Summary, depending on the version. Written in XML, RSS feeds are used by websites to syndicate their content. RSS feeds allow you to read through the news you want at your own convenience. The messages will show up on your desktop, e.g. using Mozilla Thunderbird, an integrated mail client, as soon as there is an update. If you have a Gmail account, a convenient RSS access option may be Google Reader.

Can I find my job submission history?

Yes. Every job submission is recorded in a database. Each record contains the command, the submission time, the start time, the completion time, the exit status of your program (i.e. whether it succeeded or failed), the number of CPUs used, the system, and so on.

You may review the history by logging in to your web account.

How many jobs can I submit on one cluster?

max_user_queable=1000

This means a maximum of 1000 jobs per user, either running or queued. Once some of your jobs finish running, you can submit more, up to the maximum again.

How are jobs scheduled?

Job scheduling is the mechanism which selects waiting ("queued") jobs to be started ("dispatched") on nodes in the cluster. On all of the major SHARCNET production clusters, resources are "exclusively" scheduled, so that a job will have complete access to the CPUs, GPUs or memory it is allocated (though it may be pre-empted during the course of its execution, as noted below). Details as to how jobs are scheduled follow below.

How long will it take for my queued job to start?

In practice, if your potential job does not cause you to exceed your user certification per-user process limit and there are enough free resources to satisfy the processor and memory layout you've requested for your job, and no one else has any jobs queued, then you should expect your jobs to start immediately. Once there are more jobs queued than available resources, the scheduler will attempt to arbitrate between the resource (CPU, memory, walltime) demands of all queued jobs. This arbitration happens in the following order: Dedicated Resource jobs first, then "test" jobs (which may also preempt normal jobs), and finally normal jobs. Within the set of pending normal jobs, the scheduler will prefer jobs belonging to groups which have high Fairshare priority (see below).

For information on expected queue wait times, users can check the Recent Cluster Statistics table in the web portal. This is historical data and may not correspond to the current job load on the cluster, but it is useful for identifying longer-term trends. The idea is that if you are waiting unduly long on a particular cluster for your jobs to start, you may be able to find another similar cluster where the waittime is shorter.

Although it is not possible to predict the start time of a queued job with much accuracy, there are some tools available while logged into the systems that can help estimate a relevant wait time range for your specific jobs.

First of all, it is important to gather information about the current state of the scheduling queue. By exploring the currently running and queued jobs you can get a general picture of how busy the system is. With these tools it is also possible to get a more specific picture of queue times for jobs that are similar to yours in terms of resource requests. Because the resource requests of a job play a major role in dictating its wait time, it is important to base queue time estimates on jobs that have similar requests.

The program showq can be used to view the jobs that are currently running and queued on many systems:

$ showq

active jobs------------------------
JOBID              USERNAME      STATE PROCS   REMAINING            STARTTIME
... 

For more detailed information about the queued jobs use

$ showq -i

eligible jobs----------------------
JOBID                 PRIORITY  XFACTOR  Q  USERNAME    GROUP  PROCS     WCLIMIT     CLASS      SYSTEMQUEUETIME
...

A more general listing of queue information can also be obtained using qstat as follows:

$ qstat
Job id                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
...


Once the queue has been explored, further details about specific jobs can be obtained to help estimate queue times. In many instances it is useful to filter the jobs displayed to show only jobs with specific characteristics that relate to the job type of interest. For instance, all of the queued mpi jobs can be listed by calling:

$ sqjobs -aaq --queue mpi
  jobid     user queue state ncpus   time command
------- -------- ----- ----- ----- ------ -------
...

Note that the --queue option to sqjobs, beyond filtering to the standard serial, threaded, mpi and gpu queues, can also filter the output for jobs in specific NRAP queues. This can be particularly useful for managing use within resource allocation projects.

Once specific jobs have been identified in the queue that share resource requests with the type of job you would like a queue time estimate for (e.g. a 32 process mpi job), you can obtain more details about them by calling:

$ sqjobs -l [jobid]
key                value
------------------ -----
jobid:             ...
queue:             ...
ncpus:             ...
nodes:             ...
command:           ...
working directory: ...
out file:          ...
state:             ...
submitted:         ...
started:           ...
should end:        ...
elapsed:           ...
cpu time:          ...
virtual memory:    ...
real/virt mem:     ...
 

  jobid     user queue state ncpus   time command
------- -------- ----- ----- ----- ------ -------
...

... or further calling:

$ qstat -f [jobid]
Job Id: ...
   Job_Name = ...
   Job_Owner = ...
   resources_used.cput = ...
   resources_used.mem = ...
   resources_used.vmem = ...
   resources_used.walltime = ...
   job_state = ...
   queue = ...
   server = ...
   Account_Name = ...
   Checkpoint = ..
   ctime = ...
   Error_Path = ...
   exec_host = ...
   Hold_Types = ...
   Join_Path = ...
   Keep_Files = ...
   Mail_Points = ...
   mtime = ...
   Output_Path = ...
   Priority = ...
   qtime = ...
   Rerunable = ...
   Resource_List.cput = ...
   Resource_List.procs = ...
   Resource_List.pvmem = ...
   Resource_List.walltime = ...
   session_id = ...
   Shell_Path_List = ...
   etime = ...
   submit_args = ...
   start_time = ...
   Walltime.Remaining = ...
   start_count = ...
   fault_tolerant = ...
   submit_host = ...
   init_work_dir = ...


Even though the scheduling queue provides rich information for building estimates of future job wait times, there is no way to estimate queue wait times with certainty, as the scheduling queue is a very dynamic process in which influential properties change on every scheduling cycle. Further, there are many parameters to consider, not only of the jobs currently queued and running, but also the priority ranking of the submitting user and group.

Another way to minimize your queue waittime is to submit smaller jobs. Typically it is harder for the scheduler to free up resources for larger jobs (in terms of number of cpus, number of nodes, and memory per process), and as such smaller jobs do not wait as long in the queue. The best approach is to measure the scaling efficiency of your code to find the sweet spot where your job finishes in a reasonable amount of time but waits for the least amount of time in the queue. Please see this tutorial for more information on parallel scaling performance and how to measure it effectively.

What determines my job priority relative to other groups?

The priority of different jobs on the systems is ranked according to the usage by the entire group, across SHARCNET. This system is called Fairshare.

Fairshare is based on a measure of recent (currently, past 2 months) resource usage. All user groups are ranked into 5 priority levels, with the heaviest users given lowest priority. You can examine your group's recent usage and priority here: Research Group's Usage and Priority.

This system exists to allow for new and/or light users to get their jobs running without having to wait in the queue while more resource consuming groups monopolize the systems.

Why did my job get suspended?

Sometimes your job may appear to be in a running state, yet nothing is happening and it isn't producing the expected output. In this case the job has probably been suspended to allow another job to run in its place briefly.

Jobs are sometimes preempted (put into a suspended state) if another higher-priority job must be started. Normally, preemption happens only for "test" jobs, which are fairly short (always less than 1 hour). After being preempted, a job will be automatically resumed (and the intervening period is not counted as usage.)

On contributed systems, the PI who contributed equipment and their group have high-priority access and their jobs will preempt non-contributor jobs if there are no free processors.

My job cannot allocate memory

The default memory allocation is usually 2G on most clusters. If your job requires more memory and is failing with a message "Cannot allocate memory", you should try adding the "--mpp=4g" flag to your sqsub command, with the value (in this case 4g - 4 gigabytes) set large enough to accommodate your job.

Memory is a limited resource, so jobs requesting more memory will likely wait longer in the queue before running. Hence, it is to the user's advantage to provide an accurate estimate of the memory needed.

Let us say your matlab program is called main.exe, and that you'd like to log your output in main.out; to submit this job for 5 hours you'd use sqsub like:

sqsub -o main.out -r 5h ./main.exe

By default it will be attributed an amount of memory dependent on which system you are using (1GB on orca). To increase the amount of memory to 2GB, for example, add "--mpp=2G":

sqsub --mpp=2G -o main.out -r 5h ./main.exe

If that still doesn't work you can try increasing it further.

Furthermore, you can change the requested memory for a queued job with the command qalter (in this example to 5 GB):

qalter -l pvmem=5160m jobID

where jobID would be replaced by the actual ID of a job.

Some specific scheduling idiosyncrasies:

One problem with cluster scheduling is that for a typical mix of job types (serial, threaded, various-sized MPI), the scheduler will rarely accumulate enough free CPUs at once to start any larger job. When a job completes, it frees N cpus. If there is an N-cpu job queued (and of appropriate priority), it will be run. Frequently, though, jobs smaller than N will start instead. This may still give 100% utilization, but each of those smaller jobs will complete at a different time, effectively fragmenting the N cpus into several smaller sets. Only a period of idleness (a lack of queued smaller jobs) will allow enough cpus to accumulate to let larger jobs run.

Requin is intended to enable "capability", or very large jobs. Rather than eliminating the ability to run more modest job sizes, Requin is configured with a weekly cycle: every Monday at noon, all previously running jobs will have finished and large queued jobs can start. One implication of this is that no job over 1 week can be run (and a 1-week job will only have one chance per week to start). Shorter jobs can be started at any time, but only a 1-day job can be started on Sunday, for instance.

Note that all clusters now enforce runtime limits - if the job is still running at the end of the stated limit, it will be terminated. Note also that when a job is suspended (preempted), this runtime clock stops: suspended time doesn't count, so it really is a limit on "time spent running", not elapsed/wallclock time.

Finally, when running DDT or OPT (debugger and profiler), it's normal to use the test queue. If you need to run such jobs longer than 1 hour, and find the wait times too high when using the normal queues, let us know (open a ticket). It may be that we need to provide a special queue for these uses - possibly preemptive like the test queue.

How do I run the same command on multiple clusters simultaneously?

If you're using bash and can log in to SHARCNET with authentication agent connection forwarding (the -A flag; ie. you've set up ssh keys; see Choosing_A_Password#Use_SSH_Keys_Instead.21 for a starting point), add the following environment variable and function to your ~/.bashrc shell configuration file:

~/.bashrc configuration: multiple cluster command
export SN_CLUSTERS="goblin kraken mako orca requin saw"
 
function clusterExec {
  for clus in $SN_CLUSTERS; do
     ping -q -w 1 $clus &> /dev/null
     if [ $? = "0" ]; then echo ">>> "$clus":"; echo ""; ssh $clus ". ~/.bashrc; $1"; else echo ">>> "$clus down; echo ""; fi
   done
}

You can select the relevant systems by editing the SN_CLUSTERS environment variable.

To use this function, reset your shell environment (ie. log out and back in again), then run:

clusterExec uptime

For each cluster that responds you will see the uptime of its login node; otherwise the cluster will be reported as down.

If you have old host keys (not sure why these should change...) then you'll have to clean out your ~/.ssh/known_hosts file and repopulate it with the new keys. If you suspect a problem contact an administrator for key validation or email help@sharcnet.ca. For more information see Knowledge_Base#SSH_tells_me_SOMEONE_IS_DOING_SOMETHING_NASTY.21.3F.

How do I load different modules on different clusters?

SHARCNET provides environment variables named $CLUSTER, which is the system's hostname (without sharcnet.ca), and $CLU, which resolves to a three-character identifier that is unique for each system (typically the first three letters of the cluster's name). You can use these in your ~/.bashrc to load certain software only on a particular system. For example, you can create a case statement in your ~/.bashrc shell configuration file based on the value of $CLUSTER:


~/.bashrc configuration: loading different modules on different systems
case $CLUSTER in
  orca)
  #load intel v11.1.069 when on orca instead of the default
  	module unload intel
  	module load intel/11.1.069 
  ;;
  mako)
  #alias vim to vi on mako, as the former isn't installed
    alias vim=vi
  ;;
  *)
    #Anything we want to end up in "other" here....
  ;;
esac

One can also use $CLU, as it is shorter and more convenient. Instead of a case statement, one can conditionally load or unload certain software on particular systems by inserting lines like the following into their ~/.bashrc:

~/.bashrc configuration: loading different modules on different systems
#to load gromacs only on saw: 
if [ $CLU == 'saw' ]; then module load gromacs; fi 
#to load octave on any system except saw: 
if [ $CLU != 'saw' ]; then module load octave; fi

I can't run jobs because I'm overquota?

If you exceed your /work disk quota on our systems you will be placed into a special "overquota" group and will be unable to run jobs. SHARCNET's disk monitoring system runs periodically (typically on the order of once per day), so if you have just cleaned up your files you may have to wait until it runs again for your quota status to be updated. You can see your current quota status, from the system's point of view, by running:

 quota $USER

If you can't submit jobs even after the system has updated your status, it is likely because you are logged into an old shell which still shows you in the overquota unix group. Log out and back in again, and then you should be able to submit jobs.
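
You can check this from the shell with the standard groups command (the exact name of the over-quota group may vary; this is just an illustration of old versus current group membership):

groups            # groups of your current shell process - an old login may still show the overquota group
groups $USER      # groups according to the current system database - reflects the updated status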

If you're cleaning up and not sure how much space you are using on a particular filesystem, then you will want to use the du command, eg.

 du -h --max-depth=1 /work/$USER

This will count space used by each directory in /work/$USER and the total space, and present it in a human-readable format.

For more detailed information please see the Using Storage article.

I can't run 'java' on a SHARCNET cluster?

Due to the way memory limits are implemented on the clusters, you will need to specify the maximum memory allocation pool for the Java JVM at the time you invoke it.

You do this with the -XmxNNNN command-line argument, where NNNN is the desired size of the allocation pool. Note that this number should always be within any memory limits being imposed by the scheduler (on orca compute nodes, the default limit is 1GB per process).

The login nodes are explicitly limited to 1GB of allocation for any process, so you will need to run java or javac specifying a maximum memory pool smaller than 1GB. For example:

Running java normally on a login node produces an error:

orc-login2:~% java 
Error occurred during initialization of VM 
Could not reserve enough space for object heap 
Could not create the Java virtual machine.

Specify small maximum memory allocation:

orc-login2:~% java -Xmx512m 
Usage: java [-options] class [args...] 
                      (to execute a class) 
      or java [-options] -jar jarfile [args...] 
                      (to execute a jar file)

where options include:

       -d32 use a 32-bit data model if available 
       ...

As you can see, explicitly limiting the memory allocation pool to 512MB here has it running as expected.

I cannot see anything using sqjobs on Kraken

Connection timed out 
pbsnodes: cannot connect to server krasched, error=110 (Connection timed out) 
ERROR: can't determine processors per node

This is a general error for all kraken users caused by the scheduler locking up with a bad node. The node needs to be rebooted.

Programming and Debugging

What is MPI?

MPI stands for Message Passing Interface, a standard for writing portable parallel programs which is well-accepted in the scientific computing community. MPI is implemented as a library of subroutines layered on top of a network interface. The MPI standard provides both C/C++ and Fortran interfaces, so all of these languages can use MPI. There are several MPI implementations, including OpenMPI and MPICH. Specific high-performance interconnect vendors also provide their own libraries - usually a version of MPICH layered on an interconnect-specific hardware library. For SHARCNET Alpha clusters, the interconnect is Quadrics, which provides MPI and a low-level library called "elan"; for Myrinet, the low-level library is MX or GM.

For an MPI tutorial refer to MPI tutorial.

In addition to C/C++ and Fortran versions of MPI, there exist other language bindings as well. If you have any special needs, please contact us.

What is OpenMP?

OpenMP is a standard for programming shared memory systems using threads, with compiler directives instrumented in the source code. It provides a higher-level approach to utilizing multiple processors within a single machine while keeping the structure of the source code as close to the conventional form as possible. OpenMP is much easier to use than the alternative (Pthreads) and thus is suitable for adding modest amounts of parallelism to pre-existing code. Because OpenMP is expressed as compiler directives (pragmas in C/C++, comments in Fortran), your code can still be compiled by a serial compiler and should still behave the same.

OpenMP for C/C++ and Fortran is supported by many compilers, including the PathScale and PGI compilers for Opterons, and the Intel compilers for IA32 and IA64 (such as SGI's Altix). OpenMP support has been provided in the GNU compiler suite since v4.2 (OpenMP 2.5), and starting with v4.4 it supports the OpenMP 3.0 standard.

How do I run an OpenMP program with multiple threads?

An OpenMP program uses a single process with multiple threads rather than multiple processes. On SMP systems, threads are scheduled on available processors and thus run concurrently. In order for each thread to run on its own processor, one needs to request the same number of CPUs as the number of threads to be used. To run an OpenMP program foo that uses four threads with the sqsub command, use the following

sqsub -q threaded -n 4 -r 5m ./foo

The option -n 4 reserves 4 CPUs for the (single) process.
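
As a minimal sketch (assuming the GNU compiler and a hypothetical OpenMP source file foo.c; the thread count can also be controlled with the standard OMP_NUM_THREADS environment variable), compiling and submitting such a program might look like:

gcc -fopenmp -O2 -o foo foo.c                      # -fopenmp enables the OpenMP directives
sqsub -q threaded -n 4 -r 5m -o foo.log ./foo      # reserve 4 CPUs for the 4 threads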

For a basic OpenMP tutorial refer to OpenMP tutorial.

How do I measure the cpu time when running a multi-threaded job?

The easiest solution is to use a simple benchmarking script; call it "time.sh" (it should be made executable with "chmod u+x time.sh"):

#!/bin/bash
/usr/bin/time "$@"    # "$@" preserves the quoting of arguments, unlike $*

Place it in the same directory as your code's binary, and insert "./time.sh" before the binary name in your sqsub command, e.g.

sqsub -q threaded -n8 -r2m -o out2 ./time.sh ./code code_arguments ...

I just tested it with a simple threaded application, and it does work with multiple threads. You'll get the output like this:

494.62user 0.98system 1:02.23elapsed 796%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (0major+515minor)pagefaults 0swaps

As you can see, CPU cycles from all 8 threads were counted (796%CPU).

What mathematics libraries are available?

Every system has the basic linear algebra libraries BLAS and LAPACK installed. Normally, these interfaces are contained in vendor-tuned libraries. On Intel-based (Xeon) clusters it's probably best to use the Intel math kernel library (MKL). On Opteron-based clusters, AMD's ACML library is available. However, either library will work reasonably well on both types of systems. If one expects to do a large amount of computation, it is generally advisable to benchmark both libraries so that one selects the one offering best performance for a given problem and system.

One may also find the GNU scientific library (GSL) useful for their particular needs. The GNU scientific library is an optional package, available on any machine.

For a detailed list of the libraries on each cluster, please check the documentation on the corresponding SHARCNET satellite web sites.

How do I use mathematics libraries such as BLAS and LAPACK routines?

First you need to know which subroutine you want to use; check the references to find which routines meet your needs. Then place calls to those routines in your program and compile your program against the particular library that provides them. For instance, if you want to compute the eigenvalues, and optionally the eigenvectors, of an N by N real nonsymmetric matrix in double precision, you will find that the LAPACK routine DGEEV does this. All you need to do is call DGEEV with the required parameters as specified in the LAPACK documentation, and compile your program to link against the LAPACK library.

To compile the program, you need to link it against a library that contains the LAPACK routines you call in your code. On SHARCNET a number of high quality libraries are available for you to use. The general recommendation is to use Intel's MKL library, which has a module loaded by default on most SHARCNET systems. Another popular option is the ACML library. Instructions on how to link your code with these libraries at compile time are provided on the MKL page and the ACML page.
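
As a minimal sketch (assuming a hypothetical Fortran source file eig.f90 that calls DGEEV, and that a reference LAPACK/BLAS installation is visible to the linker; see the MKL and ACML pages for the vendor-specific link lines):

gfortran -O2 eig.f90 -llapack -lblas -o eig    # link against LAPACK and BLAS
./eig                                          # run it, or submit with sqsub as usual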


My code is written in C/C++, can I still use those libraries?

Yes. Most of the libraries have C interfaces. If you are not sure about the C interface, or you need assistance in using libraries written in Fortran, we can help you out on a case-by-case basis.

What packages are available?

Various packages have been installed on SHARCNET clusters at users' requests. Custom installed packages include, for example, Gaussian, PETSc, R, Gamess, VMD, and Maple, and many others. Please check the SHARCNET web portal for the software packages installed and related usage information.

What interconnects are used on SHARCNET clusters?

Currently, several different interconnects are being used on SHARCNET clusters: Quadrics, Myrinet, InfiniBand and standard IP-based ethernet.

I would like to do some grid computing, how should I proceed?

That depends on what you mean by "grid computing". If you simply mean you want to queue up a bunch of jobs (MPI, threaded or serial) and have them run without further attention, then great! SHARCNET's model is exactly that kind of grid. However, we do not attempt to hide differences between clusters, such as file systems that are remote, or different types of CPUs or interconnect. We do not currently attempt to provide a single queue which feeds jobs to all of the clusters. Such a unified grid would require you to ensure that your program was compiled and configured to run under different flavours of Linux (mostly AMD64 Linux versions such as CentOS 5 and CentOS 6). It would also have to assume nothing about shared file systems, and it would have to be aware of the 5000x difference in latency when sending messages within a cluster versus between clusters, as well as either rely on least-common-denominator networking (ethernet) or else explicitly manage the differences between Quadrics, Myrinet, Infiniband and ethernet.

If, however, you would like to try something "unusual" that requires much more freedom than the current resource management system can handle, then you would need to discuss the details of your plan with us to make special arrangements.

Debugging serial and parallel programs

A debugger is a program which helps to identify mistakes ("bugs") in programs, either at run-time or "post-mortem" (by analyzing the core file produced by a crashed program). Debuggers can be either command-line or GUI (graphical user interface) based. Before a program can be debugged, it needs to be (re-)compiled with the -g switch, which tells the compiler to include symbolic information in the executable. For MPI problems on the HP XC clusters, -ldmpi includes the HP MPI diagnostic library, which is very helpful for discovering incorrect use of the API.

SHARCNET highly recommends using our commercial debugger DDT. It has a very friendly GUI, and can also be used for debugging serial, threaded, and MPI programs. A short description of DDT and cluster availability information can be found on its software page. Please also refer to our detailed Parallel Debugging with DDT tutorial.

SHARCNET also provides gdb (installed on all clusters, type "man gdb" to get a list of options and see our Common Bugs and Debugging with gdb tutorial).


How do I kill hung processes

Refer to this section for information on how to kill hung up processes.

What if I do not want a core dump

When you submit a batch job to the test queue, it will automatically produce a core dump in the event that a segmentation fault occurs.

This is controlled by the "ulimit -c" setting. If you do not want a core dump, you have to submit a script and in that script specify

"ulimit -c 0"

For illustration purposes consider the following simple program, residing in the file simple.c:

#include <stdio.h>
 
int main(void) {
    int i;
    int array[10];
 
    /* deliberately read far beyond the end of array[] to trigger a segmentation fault */
    i = 500000000;
    printf("Index i = %d\n", i);
    printf("%d\n", array[i]);
    return 0;
}

We compile the above program using the command:

   gcc -g simple.c

which produces the executable a.out. We then submit the following script to execute the job in batch mode in the test queue:

   ./sub_job


where the sub_job script file is as follows:

#!/bin/bash
 
sqsub -t -r 1   -q serial -o ${CLU}_CODE_%J  ./a.out

The above procedure produces an output file which starts with the following lines:

srun: error: nar315: task0: Segmentation fault (core dumped) srun: Terminating job


and a core dump file was produced.


In cases where the program is large, the core file will also be very large and will take a lot of space and time to be dumped.

So, for those cases where you do not want or need the core file, you should instead submit the script sub_job_no_core_dump as follows:

    ./sub_job_no_core_dump

where sub_job_no_core_dump is the following:

#!/bin/bash
 
# Same submission as before, but run a.out via the ulimit_script wrapper,
# which disables core dumps
sqsub -t -r 1   -q serial -o ${CLU}_CODE_%J  ./ulimit_script

and where ulimit_script is another script:

#!/bin/bash
 
ulimit -c 0    # limit the core file size to zero, i.e. disable core dumps
./a.out

This time the output from ./sub_job_no_core_dump is as follows:

~/ulimit_script: line 4: 31097 Segmentation fault ./a.out srun: error: nar150: task0: Exited with exit code 139


and no core dump file was produced.

Note: All scripts must have the proper permissions, which can be set by issuing the command:

     chmod ugo+rx <script_name>


What is NaN ?

NaN stands for "Not a Number". It is an undefined or unrepresentable value, typically encountered in floating point arithmetic (e.g., the square root of a negative number). To debug this in your program one typically has to unmask or trap floating point exceptions. This is fairly straightforward with Fortran compilers (e.g., with Intel's ifort one simply needs to add the switch -fpe0), but somewhat more complicated with C/C++ codes, where the best solution is to use the feenableexcept() function. There are further details in the Common Bugs and Debugging with gdb tutorial.
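For example, with Intel's ifort one would simply recompile with the extra switch (the file names here are placeholders):

ifort -g -fpe0 mycode.f90 -o mycode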

How can I use double precision for GPU variables (on the angel cluster)?

To use double precision for CUDA variables you need to add the following flag to the compile command:

-arch sm_13
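For example, a complete nvcc compile line with this flag might look like the following (the source and output file names are placeholders):

nvcc -arch sm_13 mycode.cu -o mycode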

For further information on using CUDA please see this tutorial / online reference.

How do I compile my MPI program with diagnostic information?

HP-MPI

This is the version of MPI used by default on the Opteron clusters running the XC operating system (e.g., requin). To get diagnostic information (useful for solving "MPI BUG" errors), compile your code with the additional -ldmpi flag.
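For example, a compile line might look like this (the mpicc wrapper and file names here are assumptions; use whatever compile command you normally use for your code, with -ldmpi added):

mpicc -g mycode.c -o mycode -ldmpi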

What is a SIGABRT error?

This error typically occurs when a program is used on very large files that its code does not support (integer overflow). The solution is either to split the files to be analyzed into smaller chunks, or to ask the program's authors to modify the code so the problem is eliminated.

My program exited with an error code XXX - what does it mean?

Your application crashed, producing an error code XXX (where XXX is a number). What does it mean? The answer may depend on your application. Normally, user codes do not use the first 130 or so error codes, which are reserved for operating system level errors. On most of our clusters, typing

 perror  XXX

will print a short description of the error. (perror is a MySQL utility, and for XXX > 122 it will start printing only MySQL-related error messages.) An accurate list of system error codes for the current operating system can be found on our clusters by printing the contents of the file /usr/include/asm-x86_64/errno.h (/usr/include/asm-generic/errno.h on some systems).

When the error code is returned by the scheduler (i.e., when a program submitted to the scheduler with "sqsub" crashes), it has a different meaning. Specifically, if the code is less than or equal to 128, it is a scheduler error (not an application error); such situations should be reported to SHARCNET staff. Scheduler exit codes between 129 and 255 are user job error codes; subtract 128 to obtain the usual OS error code.
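As a quick sketch of this arithmetic (the exit code value below is just an example):

EXIT_CODE=139                                     # example exit code reported for a failed job
if [ "$EXIT_CODE" -gt 128 ]; then
    echo "OS error code: $((EXIT_CODE - 128))"    # 139 - 128 = 11
    perror $((EXIT_CODE - 128))                   # print a short description of the code
fi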

On our systems that run Torque/Maui/Moab, exit code 271 means that your program has exceeded one of the resource limits you specified when you submitted your job, typically either the runtime limit or the memory limit. One can correct this by setting a larger runtime limit with the sqsub -r flag (up to the limit allowed by the queue, typically 7 days) or by setting a larger memory limit with the sqsub --mpp flag, depending on the message that was reported in your job output file (exceeding the runtime limit will often only result in a message indicating "killed"). Note that both of these values will be assigned reasonable defaults that depend on the system and may vary from system to system. Another common exit code relating to memory exhaustion is 41 -- this may be reported by a job in the done state and should correspond with an error message in your job output file.
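For example, a submission with explicit runtime and memory limits might look like the following (the time and memory values and file names are placeholders; consult the sqsub documentation on your cluster for the exact option formats):

sqsub -q serial -r 2d --mpp 4G -o out_%J ./a.out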

For your convenience, we list OS error codes below:

 1  Operation not permitted
 2  No such file or directory
 3  No such process
 4  Interrupted system call
 5  I/O error
 6  No such device or address
 7  Arg list too long
 8  Exec format error
 9  Bad file number
10  No child processes
11  Try again
12  Out of memory
13  Permission denied
14  Bad address
15  Block device required
16  Device or resource busy
17  File exists
18  Cross-device link
19  No such device
20  Not a directory
21  Is a directory
22  Invalid argument
23  File table overflow
24  Too many open files
25  Not a typewriter
26  Text file busy
27  File too large
28  No space left on device
29  Illegal seek
30  Read-only file system
31  Too many links
32  Broken pipe
33  Math argument out of domain of func
34  Math result not representable
35  Resource deadlock would occur
36  File name too long
37  No record locks available
38  Function not implemented
39  Directory not empty
40  Too many symbolic links encountered
41  (Reserved error code)
42  No message of desired type
43  Identifier removed
44  Channel number out of range
45  Level 2 not synchronized
46  Level 3 halted
47  Level 3 reset
48  Link number out of range
49  Protocol driver not attached
50  No CSI structure available
51  Level 2 halted
52  Invalid exchange
53  Invalid request descriptor
54  Exchange full
55  No anode
56  Invalid request code
57  Invalid slot
58  (Reserved error code)
59  Bad font file format
60  Device not a stream
61  No data available
62  Timer expired
63  Out of streams resources
64  Machine is not on the network
65  Package not installed
66  Object is remote
67  Link has been severed
68  Advertise error
69  Srmount error
70  Communication error on send
71  Protocol error
72  Multihop attempted
73  RFS specific error
74  Not a data message
75  Value too large for defined data type
76  Name not unique on network
77  File descriptor in bad state
78  Remote address changed
79  Can not access a needed shared library
80  Accessing a corrupted shared library
81  .lib section in a.out corrupted
82  Attempting to link in too many shared libraries
83  Cannot exec a shared library directly
84  Illegal byte sequence
85  Interrupted system call should be restarted
86  Streams pipe error
87  Too many users
88  Socket operation on non-socket
89  Destination address required
90  Message too long
91  Protocol wrong type for socket
92  Protocol not available
93  Protocol not supported
94  Socket type not supported
95  Operation not supported on transport endpoint
96  Protocol family not supported
97  Address family not supported by protocol
98  Address already in use
99  Cannot assign requested address
100 Network is down
101 Network is unreachable
102 Network dropped connection because of reset
103 Software caused connection abort
104 Connection reset by peer
105 No buffer space available
106 Transport endpoint is already connected
107 Transport endpoint is not connected
108 Cannot send after transport endpoint shutdown
109 Too many references: cannot splice
110 Connection timed out
111 Connection refused
112 Host is down
113 No route to host
114 Operation already in progress
115 Operation now in progress
116 Stale NFS file handle
117 Structure needs cleaning
118 Not a XENIX named type file
119 No XENIX semaphores available
120 Is a named type file
121 Remote I/O error
122 Quota exceeded
123 No medium found
124 Wrong medium type
125 Operation Cancelled
126 Required key not available
127 Key has expired
128 Key has been revoked
129 Key was rejected by service


Getting Help

I have encountered a problem while using a SHARCNET system and need help, who should I talk to?

If you have access to the Internet, we encourage you to use the problem ticketing system (described in detail below) through the web portal. This is the most efficient way of reporting a problem as it minimizes email traffic and will likely result in you receiving a faster response than through other channels.

You are also welcome to contact system administrators and/or high performance technical computing consultants at any time. You may find their contact information on the directory page.

How long should I expect to wait for support?

Unfortunately SHARCNET does not have adequate funding to provide support 24 hours a day, 7 days a week. User support and system monitoring are limited to regular business hours: there is no official support on weekends or holidays, or outside 9:00-17:00 EST.

Please note that this includes monitoring of our systems and operations, so typically when there are problems overnight or on weekends/holidays system notices will not be posted until the next business day.

SHARCNET Problem Ticket System

What is a "problem ticket system"?

This is a system that allows anyone with a SHARCNET account to start a persistent email thread that is referred to as a "problem ticket". The thread is stored indefinitely by SHARCNET and can be consulted by any SHARCNET user in the future. When a user submits a new ticket it will be brought to the attention of an appropriate and available SHARCNET staff member for resolution.

You can find the SHARCNET ticket system here, or by logging into our website and clicking on "Help" then "Problems" in the top left-hand-side menu.

How do I search for existing tickets ?

Type a meaningful string into the search box when logged into the SHARCNET web portal. You can find this text entry box beside the Go button on the top right-hand-side of the page in the web portal.

It is recommended that one use specific words when searching, for example the exit code returned in your job output, or the error message produced when attempting a command. Use of common search terms may produce too many results; this, coupled with the lack of sophisticated ranking of results, means your search will likely be misleading or time consuming if you have to sift through many results by hand.

What do I need to specify in a ticket ?

If you do not find any tickets that deal with your current problem (as illustrated above) then you should ensure you include the following information, if relevant, when submitting a ticket:

  1. use a concise and unique Subject for the ticket
    • for example, this makes the ticket easier to identify in search results
  2. select sensible values for the System Name and Category drop down boxes
    • this helps guide your ticket to the right staff member as quickly as possible
  3. in the Comment text entry box:
    1. if the problem pertains to a job report the jobid associated with the job
      • this is an integer that is returned by sqsub when you submit the job
      • you can also find a listing of your recent jobs (including their jobid) in the web portal at the bottom of this page
    2. report the exact commands necessary to duplicate the problem, as well as any error output that helps identify the problem
      • if relevant, this should include how the code is compiled, how the job is submitted, and/or anything else you are doing from the command line relating to the problem
    3. if this ticket relates to another ticket, please specify the associated ticket number(s)
    4. if you'd like for a particular staff member to be aware of the ticket, mention them
  4. if you want to expedite resolution you can make your files publicly available ahead of time (a minimal sketch follows this list)
    • you should include any relevant files required to duplicate the problem
    • if you're not comfortable changing your own file permissions, you can request in the Comment that a staff member provide a location where you can copy the necessary files, or arrange a file transfer by other means. If your code is really sensitive you may have to arrange to meet in person to demonstrate the problem.
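A minimal sketch of the file-sharing step in item 4, using the chmod method described in the file-access sections later on this page (the directory name problem_dir is a placeholder):

chmod o+x /work/$USER                    # allow others to traverse your top work directory
chmod -R o+rX /work/$USER/problem_dir    # make the problem files world readable (directories also enterable)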

How do I submit a ticket?

We recommend that you read the above section on what to specify in a ticket before submitting a new ticket.

Users can submit a problem ticket describing an issue, problem or other request and they will then receive messages concerning the ticket via email (the ticket can also be consulted via the web portal).

You can also open a ticket automatically by emailing help@sharcnet.ca from the email address associated with your SHARCNET account.

How do I give other users access to my files ?

There are two ways to provide other users with access to your files. The first is by changing the file attributes of your directories directly with the chmod command, and the second is by using file access control lists (ACLs). Using ACLs is more flexible as it allows you to specify individual users and groups and their respective privileges, whereas chmod is more coarse grained and only allows you to set the permissions for your group and global access. At present ACLs are only supported on the SHARCNET global work (/work) and home (/home) filesystems.

Enabling Per-user/group Access: chmod Method

Suppose you have a program and some files in:

/home/account/research/projectx

to which you want to provide access for some users and/or groups.

The first step is to make the "top" directory you control have world execute permission. This will allow other users to cd (change directory) into subdirectories beneath it. Only world execute permission is needed; world read permission is not. (Enabling world read permission would allow anyone to see all file and subdirectory names in that directory, so you may wish to leave it off.)

The "top" directories you control on SHARCNET are any of these (where $USER is your SHARCNET userid):

  • /home/$USER
  • /work/$USER
  • /scratch/$USER

So if you want to provide access to the directory:

/home/$USER/research/projectx

you would run (once) the following command:

chmod o+x /home/$USER

or equivalently (since this is your home directory):

chmod o+x ~

If you also want to be sure others cannot see or modify the files and subdirectories in your home directory, then add -rw (which removes others' ability to read the directory contents and to write into or delete them) to the chmod command as follows:

chmod o+x-rw /home/$USER

Similarly, you can set your "top" work directory if that is where you want to provide access, like this:

chmod o+x /work/$USER

or with the added -rw as follows:

chmod o+x-rw /work/$USER

NOTE: If unsure and you want to err on the side of keeping things private, use "chmod o+x-rw DIRECTORY_NAME".

Now repeat this process for all directories in the path you want to provide access to except the last one. For example, to provide access to this projectx directory:

/home/$USER/research/projectx

you would need to run:

chmod o+x /home/$USER
chmod o+x /home/$USER/research

For the last directory, i.e., projectx, provide both read and execute permissions, and, if you want to allow others to write to that directory, also allow write permission.

To provide only read and execute permission on the last directory, run:

chmod o+rx /home/$USER/research/projectx

and to provide read, write, and execute permission, run:

chmod o+rwx /home/$USER/research/projectx

More realistically, however, you would like others to be able to do one of the following:

  1. read everything in the projectx directory (and disallow others' ability to write/update/delete), or,
  2. read everything and be able to modify/write contents within the projectx directory.

To do the former (i.e., Item 1), run on the last directory (i.e., "projectx" in this example):

chmod -R o+rX-w /home/$USER/research/projectx

and to do the latter (i.e., Item 2), run on the last directory (again "projectx" in this example):

chmod -R o+rwX /home/$USER/research/projectx
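To double-check what you have granted, you can list the permissions along the path, e.g. (paths as in the example above):

ls -ld /home/$USER /home/$USER/research /home/$USER/research/projectx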

Now you can give the users who are to have access to this directory its FULL PATH, i.e.,

/home/yourloginname/research/projectx

and those users will be able to run:

cd /home/yourloginname/research/projectx

to have the access you've granted. (Don't tell the user a path with $USER in it --that won't work: you must use the full path. If you are unsure, "cd" to that directory and run the "pwd" command which will output the full path to the "present working directory".)

NOTE: The "other" permission settings in this section allow ANY other user the actions implied by the permissions you've set. If this is too open, then read the sections below that use the setfacl command.

Disabling Per-user/group Access: chmod Method

At some point, you will want to revoke permissions granted to others. If you had previously provided access to your "projectx" directory using these commands:

chmod o+x /home/$USER
chmod o+x /home/$USER/research
chmod -R o+rwX /home/$USER/research/projectx

then you would revoke access using:

chmod o-rwx /home/$USER
chmod o-rwx /home/$USER/research
chmod -R o-rwx /home/$USER/research/projectx

Be aware that this will revoke all "other" access. If other users are using other directories under your home directory, then you will not want to run:

chmod o-x /home/$USER

as that will prevent those users from accessing those other directories.

Controlling Access to Files/Directories Using setfacl

An Access Control List (ACL) is a list of users and groups, with their associated access privileges, that is attached to a file or directory. Using ACLs allows fine-grained control over which users and/or groups of users can access which files and/or directories.

NOTE 1: At present ACLs are only supported on the SHARCNET global work (/work) and home (/home) filesystems.

NOTE 2: If you are granting access to SHARCNET staff, e.g., because you are (or will be) receiving assistance from multiple staff members, you may find it much easier to grant access to the SN_staff group rather than to each individual staff person.


Enabling Per-user/group Access: setfacl Method

Although you can use the setfacl command to grant permissions everywhere needed, it is simpler to use the chmod command to set execute permissions on your "top" directory and all directories below the "top" one first. Suppose you want to grant access to the following "projectx" directory (to everything in and under it):

/home/$USER/research/projectx

where $USER is your userid (i.e., SHARCNET login). The "top" directory is:

/home/$USER

so, to give others execute permission on it, you would run:

chmod o+x /home/$USER

If you prefer to give access only to a specific user, called USERNAME here, then use setfacl instead:

setfacl -m u:USERNAME:x /home/$USER

Notice the 'u' in "u:USERNAME": it means "user". Replace USERNAME with that user's login name.

If you want to give access only to a specific group, e.g., SN_staff, then use setfacl as follows:

setfacl -m g:SN_staff:x /home/$USER

Notice the 'g' in "g:SN_staff": it means "group". Replace SN_staff with the name of the group you want to grant access to.

Similarly, you will want to provide access to the directories under the "top" one except the last one. If you wanted to grant access to your "projectx" directory located here:

/home/$USER/research/projectx

then you will need to grant execute permission to both /home/$USER and /home/$USER/research, e.g.,

chmod o+x /home/$USER
chmod o+x /home/$USER/research

or use setfacl to do the same for some USERNAME:

setfacl -m u:USERNAME:x /home/$USER
setfacl -m u:USERNAME:x /home/$USER/research

or some group (e.g., SN_staff):

setfacl -m g:SN_staff:x /home/$USER
setfacl -m g:SN_staff:x /home/$USER/research

For the last directory, you will want to grant to all content within that directory either:

  1. read and execute permissions without the ability to modify/write, or
  2. read, write, and execute permissions.

To do Item 1 (i.e., grant read and execute but no write) with setfacl for some user name on the "projectx" directory:

setfacl -R -m u:USERNAME:rX /home/$USER/research/projectx

and for some group, e.g., SN_staff, one would write:

setfacl -R -m g:SN_staff:rX /home/$USER/research/projectx

To do Item 2 (i.e., also grant write permission), replace rX with rwX in the above commands, for example:

setfacl -R -m u:USERNAME:rwX /home/$USER/research/projectx

Now you can give the users who are to have access to this directory its FULL PATH, i.e.,

/home/yourloginname/research/projectx

and those users will be able to run:

cd /home/yourloginname/research/projectx

to have the access you've granted. (Don't tell the user a path with $USER in it --that won't work: you must use the full path. If you are unsure, "cd" to that directory and run the "pwd" command which will output the full path to the "present working directory".)


Disabling Per-user/group Access: setfacl Method

At some point, you will want to revoke permissions granted to others. If you had previously provided access to a directory using these chmod commands:

chmod o+x /home/$USER
chmod o+x /home/$USER/research

then you would revoke access using:

chmod o-rwx /home/$USER
chmod o-rwx /home/$USER/research

Be aware that this will revoke all "other" users' access through these directories.

If you used setfacl, then run the same command you previously used but replace -m with -x.

For example, if you granted permissions using:

setfacl -m u:USERNAME:x /home/$USER
setfacl -m u:USERNAME:x /home/$USER/research
setfacl -R -m u:USERNAME:rwX /home/$USER/research/projectx

or:

setfacl -m g:SN_staff:x /home/$USER
setfacl -m g:SN_staff:x /home/$USER/research

then you would revoke these permissions using (respectively):

setfacl -x u:USERNAME /home/$USER
setfacl -x u:USERNAME /home/$USER/research
setfacl -R -x u:USERNAME /home/$USER/research/projectx

or:

setfacl -x g:SN_staff /home/$USER
setfacl -x g:SN_staff /home/$USER/research
setfacl -R -x g:SN_staff /home/$USER/research/projectx

You can verify that users no longer have access using getfacl.
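For example (path as in the earlier example; the getfacl command is described in more detail in the next section):

getfacl /home/$USER/research/projectx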

A Brief Overview of the getfacl and setfacl Commands

One can see the ACL for a particular file/directory with the getfacl command, e.g.:

[sn_user@hnd50 ~]$ getfacl /work/sn_user
getfacl: Removing leading '/' from absolute path names
# file: work/sn_user
# owner: sn_user
# group: sn_user
user::rwx
group::r-x
other::--x

One uses the setfacl command to modify the ACL for a file/directory. To add read and execute permissions for this directory for user ricky, e.g.:

[sn_user@hnd50 ~]$ setfacl -m u:ricky:rx /work/sn_user

Now there is an entry for user:ricky with r-x permissions:

[sn_user@hnd50 ~]$ getfacl /work/sn_user
getfacl: Removing leading '/' from absolute path names
# file: work/sn_user
# owner: sn_user
# group: sn_user
user::rwx
user:ricky:r-x
group::r-x
mask::r-x
other::--x

To remove an ACL entry one uses the setfacl command with the -x argument, e.g.:

[sn_user@hnd50 ~]$ setfacl -x u:ricky /work/sn_user

Now there is no longer an entry for ricky:

[sn_user@hnd50 ~]$ getfacl /work/sn_user
getfacl: Removing leading '/' from absolute path names
# file: work/sn_user
# owner: sn_user
# group: sn_user
user::rwx
group::r-x
mask::r-x
other::--x

Note that if one wants to provide access to a nested directory, then (at least) execute permission also needs to be granted on all of its parent directories; the -R flag applies a change recursively to everything under a directory. Please see the man pages for these commands (man getfacl; man setfacl) for further information. If you'd like help using ACLs please email help@sharcnet.ca.

I am new to parallel programming, where can I find quick references at SHARCNET?

SHARCNET has a number of training modules on parallel programming using MPI, OpenMP, pthreads and other frameworks. Each of these modules has working examples that are designed to be easy to understand while illustrating basic concepts. You may find these along with copies of slides from related presentations and links to external resources on the Main Page of this training/help site.

I am new to parallel programming, can you help me get started with my project?

Absolutely. We will be glad to help you with everything from planning the project, designing your application with appropriate algorithms, and choosing efficient tools for the associated numerical problems, to debugging and analyzing your code. We will do our best to help you speed up your research.

Can you install a package on a cluster for me?

Certainly. We suggest you make the request by sending e-mail to help@sharcnet.ca, or opening a problem ticket with the specific request.

I am in the process of purchasing computer equipment for my research; would you be able to provide technical advice on that?

If you tell us what you want, we may be able to help you out.

Does SHARCNET have a mailing list or user group?

Yes. You may subscribe to one or more mailing lists on the email list page available once you log into the web portal. To find it, please go to MyAccount - Settings - Details in the menu bar on the left and then click on Mail on the "details" page. Don't forget to save your selections.

How do I add/remove myself to/from a SHARCNET mailing list?

To add/remove yourself to/from a SHARCNET mailing list, do the following:

  1. Log in to the SHARCNET portal: https://www.sharcnet.ca/
  2. Click on the My Account menu item.
  3. Click on the Settings menu item under My Account.
  4. Click on the Details menu item under Settings.
  5. Click on the Mail link near the bottom of the page.

The page that appears has checkboxes that allow you to add/remove yourself to/from a SHARCNET mailing list. To add yourself, click the checkbox on the line of the mailing list you are interested in; to remove yourself, uncheck the checkbox instead. Finally, when done, be sure to click the Save button at the bottom of the page to record these changes.

Does SHARCNET provide any training on programming and using the systems?

Yes. SHARCNET provides workshops on specific topics from time to time and offers courses at some sites. Every June, SHARCNET holds an annual summer school with a variety of in-depth, hands-on workshops. All materials from past workshops/presentations can be found on the SHARCNET web portal.

SHARCNET also offers a series of online seminars. These are announced via the SHARCNET events mailing list and one can see the schedule at the SHARCNET event calendar. Past seminars are recorded, a full listing is in the Online_Seminars page.

How do I watch tickets I don't own?

There are two ways. First, to view the tickets of user USERID, enter a URL like the one below:

https://www.sharcnet.ca/my/problems/view?username=USERID

where USERID is the user whose tickets you want to see. In the "Actions" column, click on "watch" for problems that you want to follow. You will then receive notifications when any of the problems you are "watching" are updated.

If you want to do the same thing for tickets posted by other members of your group, just access their user page (listed on https://www.sharcnet.ca/my/users/show/361 ).

The other way is to use the search box on the SHARCNET website. By typing the ticket number or userid, you can achieve something similar to what is described above.

Attending SHARCNET Webinars

IMPORTANT: We have recently seen an increased number of users who could not attend our webinars because of technical issues with Vidyo (our videoconferencing platform). The fix appears to be not to rely on the in-browser support for Vidyo (which currently seems to be buggy), but instead to install the Vidyo application. Here are the steps for those who experience issues with Vidyo:

  1. Install the Vidyo application from this link: https://vidyo.computecanada.ca (they have versions for all major platforms - Windows, Mac, Linux).
  2. Open the installed application (there is no need to login).
  3. Go to your browser, and enter / click on the "SN Seminars Vidyo room" link we provided (in our emails, and in the calendar on sharcnet.ca).
  4. The Vidyo app should now connect to the webinar video stream.

SHARCNET makes a number of seminar events available online (New User Seminar, general interest talks, etc.) using software/services from Vidyo. Vidyo allows both the presenter and the attendees to offer or participate in online seminars using their web browser plus a small application one has to install when the service is used for the first time. If this is your first Vidyo seminar please join the seminar ahead of the official start, to sort out any technical issues. Vidyo is supported on most platforms, both "stationary" (Windows, MacOS, Linux) and mobile (iOS, Android).

Please note that if your device has a microphone (highly recommended) and/or webcam, they will be used by Vidyo to transmit your audio and video to all seminar participants. They will be on by default, but you can always disable them by clicking on a corresponding button at the top of your Vidyo window. We ask that all attendees keep their microphones muted, unless you want to ask something.

We normally record our seminars, and make them available to all SHARCNET users. All recent and new webinars are posted on our youtube channel, http://youtube.sharcnet.ca . The links to the video recordings, slides and abstracts can be found on our online seminars page.

If you are using Vidyo to attend one of our weekly "Introduction to SHARCNET" seminars (for new users), please use your SHARCNET login name or your full name as a Vidyo login name. This will help us to record your attendance, so you can be enabled to take the online quiz after the seminar.

If you do not have headphones and/or a microphone, we provide a toll-free call-in option: 1-855-728-4677, ext. 5542. Alternatively, you may wish to watch the seminar as a live webcast, although in this mode you will be unable to interact with the presenter.

We strongly recommend that users pre-register for the seminar they plan to attend, using the following link: Event Registration.

To receive email notifications about upcoming General Interest seminars, please enable the "Events" mailing list in your settings if you are a SHARCNET user; if you are not a SHARCNET user, please send an email to syam@sharcnet.ca .

Please note that times for our webinars are for the Eastern Time (EST/EDT) zone.

Research at SHARCNET

Where can I find what other people do at SHARCNET?

You may find some of the research activities at SHARCNET by visiting our research initiatives and researcher profile pages.

I have a research project I would like to collaborate on with SHARCNET, who should I talk to?

You may contact SHARCNET head office or contact members of the SHARCNET technical staff.

How can I contribute compute resources to SHARCNET so that other researchers can share it?

Most people's research is "bursty" - there are usually sparse periods of time when some computation is urgently needed, and other periods when there is less demand. One problem with this is that if you purchase the equipment you need to meet your "burst" needs, it'll probably sit, underutilized, during other times.

An alternative is to donate control of this equipment to SHARCNET, and let us arrange for other users to use it when you are not. We prefer to be involved in the selection and configuration of such equipment. Some of SHARCNET's most useful clusters were created this way — Goblin, Wobbie and others were purchased with user contributions, and Orca's newest/fastest nodes are contributed. Our promise to contributors is that as much as possible, they should obtain as much benefit from the cluster as if it were not shared. Owners get preferential access. Naturally, owners are also able to burst to higher peak usage, since their equipment has been pooled with other contributions. (Technically, SHARCNET cannot itself own such equipment — it remains owned by the institution in question, and will be returned to the contributor upon request.) If you think this model will also work for you and you would like to contribute your computational resource to help the research community at SHARCNET, you can contact us for such arrangement.

I do not know much about computation, nor is it my research interest. But I am interested in getting my research done faster with the help of the high performance computing technology. In other words, I do not care about the process and mechanism, but only the final results. Can SHARCNET provide this type of help?

We will be happy to bring the technology of high performance computing to you to accelerate your research, if at all possible. If you would like to discuss your plan with us, please feel free to contact our high performance computing specialists. They will be happy to listen to your needs and are ready to provide appropriate suggestions and assistance.

I am a faculty member from a non-SHARCNET member institution. Could I apply for an account and sponsor my students' accounts?

As long as you and your students can obtain a Compute Canada account you will be able to obtain SHARCNET accounts. See above for further information on what is required to obtain an account.

I need access to more CPU cores or storage than are available by default, what programs exist to support demanding computation?

SHARCNET participates in the Compute Canada NRAC (National Resource Allocation Competition) and runs an ongoing competition for groups that require more than the default level of access to our resources. Please see Dedicated Resources for further information.

I heard SHARCNET offers fellowships, where can I get more information?

SHARCNET no longer actively runs a fellowship program. You may find information regarding past fellowships and other dedicated resource opportunities on the Research Fellowships page of the web portal.

I would like to do some research at SHARCNET as a visiting scholar, how should I apply?

In general, you will need to find a hosting department or a person affiliated with one of the SHARCNET institutions. You may also contact us directly for more specific information.

I would like to send my students to SHARCNET to do some work for me. How should I proceed?

See above.



Contacting SHARCNET

How do I contact SHARCNET for research, academic exchanges, and technical issues?

Please contact SHARCNET head office.

How do I contact SHARCNET for business development, education and other issues?

Please contact SHARCNET head office.

How do I contact a specific staff member at SHARCNET?

See staff directory for contact information.

How to Acknowledge SHARCNET in Publications

How do I acknowledge SHARCNET in my publications?

We recommend one cite the following:

This work was made possible by the facilities of the Shared Hierarchical 
Academic Research Computing Network (SHARCNET:www.sharcnet.ca) and Compute/Calcul Canada.

I've seen different spellings of the name, what is the standard spelling of SHARCNET?

We suggest the spelling SHARCNET, all in upper case.


What types of research programs / support are provided to the research community?

Our overall intent is to provide support that both responds to the range of needs the user community presents and helps to increase the sophistication of the community, enabling new and larger-in-scope applications that make use of SHARCNET's HPC facilities. The range of support can perhaps best be understood in terms of a pyramid:

Level 1

At the apex of the pyramid, SHARCNET supports a small number of projects with dedicated programmer support. The intent is to enable projects that will have a lasting impact and may lead to a "step change" in the way research is done at SHARCNET. Inter-disciplinary and inter-institutional projects are particularly welcomed. For the latest information about the program, including application guidelines, please see the Programming Competition page in our web portal. For information about projects that have been supported please see: Dedicated Programming Support Projects.

Level 2

The middle layers of support are provided through a number of initiatives.

These include:

  • Programming support of more modest duration (several days to one month engagement, usually part time)
  • Training on a variety of topics through workshops, seminars and online training materials
  • Consultation. This may include user-initiated interactions on particular programs, algorithms, techniques, debugging, optimization etc., as well as unsolicited help to ensure effective use of SHARCNET systems
  • Site Leaders play an important role in working with the community to help researchers connect with SHARCNET staff and to obtain appropriate help and support.

Level 3

The base level of the pyramid handles the very large number of small requests that are essential to keeping the user community working effectively with the infrastructure on a day-to-day basis. Several of these can be answered by this FAQ; many of the issues are presented through the ticketing system. The support is largely problem oriented with each problem being time limited.