

Note: Some of the information on this page is for our legacy systems only. The page is scheduled for an update to make it applicable to Graham.


Logging in to Systems, Transferring and Editing Files

How do I login to SHARCNET?

There is no single point of entry at present. "Logging in to SHARCNET" means you login to one of the SHARCNET systems. A complete list of SHARCNET systems can be found on our facilities page.

Please note that the Graham system requires Compute Canada credentials to log in.

Unix/Linux/OS X

To login to a system, you need to use a Secure Shell (SSH) connection. If you are logging in from a UNIX-based machine, make sure it has an SSH client (ssh) installed (this is almost always the case on UNIX/Linux/OS X). If you have the same login name on both your local system and SHARCNET, and you want to login to, say, saw, you may use the command:

ssh saw.sharcnet.ca

If your SHARCNET username is different from the username on your local systems, then you may use either of the following forms:

ssh saw.sharcnet.ca -l username
ssh username@saw.sharcnet.ca

If you want to establish an X window connection so that you can use graphics applications such as gvim and xemacs, you can add a -Y to the command:

ssh -Y username@saw.sharcnet.ca

This will automatically set the X DISPLAY variable when you login.

IMPORTANT: to login to Graham, you have to use your Compute Canada credentials (login name and password), not your SHARCNET credentials!

Windows

If you are logging in from a computer running Windows and need some pointers, we recommend consulting our SSH tutorial.

What is the difference between Login Nodes and Development Nodes?

Login Nodes

Most of our clusters have distinct login nodes associated with them that you are automatically redirected to when you login to the cluster (some systems are directly logged into, eg. SMPs and smaller specialty systems). You can use these to do most of your work preparing for jobs (compiling, editing configuration files) and other low-intensity tasks like moving and copying files.

You can also use them for other quick tasks, like simple post-processing, but any significant work should be submitted as a job to the compute nodes. On most login nodes, each process is limited to 1 cpu-hour; this will be noticeable if you perform anything compute-intensive, and can affect IO-oriented activity as well (such as very large scp or rsync operations).
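If you are unsure whether a per-process CPU time limit is in effect in your login shell, one quick way to check (a sketch, assuming a bash shell; a separate process killer may still apply even if no shell limit is shown) is:

 ulimit -t     # prints the CPU time limit in seconds, or "unlimited" if no shell limit is set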

Here is an example of logging in and being redirected to a saw login node, in this case saw-login1:

localhost:~ sn_user$ ssh saw.sharcnet.ca
Last login: Fri Oct 14 22:38:40 2011 from localhost.your_institution.ca

Welcome to the SHARCNET cluster Saw.
Please see the following URL for status of this and other clusters:
https://www.sharcnet.ca/my/systems


[sn_user@saw-login1 ~]$ hostname
saw-login1

Development Nodes

On some systems there are also development nodes which can be used to do slightly more resource intensive, interactive work. For the most part these are identical to cluster login nodes, however they are not visible outside of their respective cluster (one can only reach them after logging into a login node) and they have more modest resource limits in place, allowing for the ability to do quick interactive testing outside of the job queuing system. Please see the help wiki pages for the respective clusters, Orca, Saw and Kraken, for further details on how one can use these nodes.

How can I suspend and resume my session?

The program screen can start persistent terminals from which you can detach and reattach. The simplest use of screen is

screen -dR

which will either reattach you to any existing session or create a new one if one doesn't exist. To terminate the current screen session, type exit. To detach manually (you are automatically detached if the connection is lost) press ctrl+a followed by d; you can then resume later as above (ideal for running background jobs). Note that ctrl+a is screen's escape sequence, so you have to do ctrl+a followed by a to get the regular effect of pressing ctrl+a inside a screen session (e.g., moving the cursor to the start of the line in a shell).

For a list of other ctrl+a key sequences, press ctrl+a followed by ?. For further details and command line options, see the screen manual (or type man screen on any of the clusters).

Other notes:

  • If you want to create additional "text windows", use Ctrl-A Ctrl-C. Remember to type "exit" to close it.
  • To switch to a "text window" with a certain number, use Ctrl-A # (where # is 0 to 9).
  • To see a list of window numbers use Ctrl-A w
  • To be presented a list of windows and select one to use, use Ctrl-A " (This is handy if you've made too many windows.)
  • If the program running in a screen "text window" refuses to die (i.e., it needs to be killed) you can use Ctrl-A K
  • For brief help on keystrokes use Ctrl-A ?
  • For extensive help, run "man screen".
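If you work with more than one persistent session, it can help to give each one a name; for example (the session name "postproc" is just a placeholder):

 screen -S postproc     # start a new session named "postproc"
 screen -ls             # list your existing sessions
 screen -r postproc     # reattach to the session named "postproc"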

What operating systems are supported?

UNIX in general. Currently, Linux is the only operating system used within SHARCNET.

What makes a cluster different than my UNIX workstation?

If you are familiar with UNIX, then using a cluster is not much different from using a workstation. When you login to a cluster, you in fact only log in to one of the cluster nodes. In most cases each cluster node is a physical, server-class machine with one or more CPUs, more or less the same as a workstation you are familiar with. The difference is that these nodes are interconnected with special interconnect devices, and the way you run your program is slightly different. On SHARCNET clusters you are not expected to run your programs interactively; you will have to run your program through a queueing system. That also means that where and when your program gets to run is decided not by you, but by the queueing system.
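The details of job submission depend on the scheduler in use. As a minimal sketch, assuming the Slurm scheduler used on Graham (the script name, account name and program name below are placeholders), a simple serial job could look like this:

 #!/bin/bash
 #SBATCH --time=01:00:00         # requested wall-clock time (hh:mm:ss)
 #SBATCH --mem=1G                # requested memory
 #SBATCH --account=def-someuser  # your allocation account (placeholder)
 ./my_program                    # hypothetical executable to run

Saving this as myjob.sh and running "sbatch myjob.sh" places the job in the queue; the scheduler decides when and where it runs.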

Which cluster should I use?

Each of our clusters is designed for a particular type of job. Our cluster map shows which systems are suitable for various job types.

What programming languages are supported?

The primary programming languages C, C++ and Fortran are fully supported. Other languages, such as Java, Pascal and Ada, are also supported, but with limited technical support from us. If your program is written in a language other than C, C++ or Fortran and you encounter a problem, we may or may not be able to solve it within a short period of time. Note: this does not mean you can't use other languages like Matlab, R, Python, Perl, etc. We normally think of those as "scripting" languages, but that doesn't imply that good HPC necessarily requires an explicitly-compiled language like Fortran.

How do I organize my files?

To best meet a range of storage needs, SHARCNET provides a number of distinct storage pools that are implemented using a variety of file systems, servers, RAID levels and backup policies. These different storage locations are summarized as follows:

Legacy system

place      quota**   expiry     access                       purpose                        backed-up?
/home      10 GB     none       unified                      sources, small config files    Yes
/work      1 TB      none       unified*                     active data files              No
/scratch   none      2 months   per-cluster                  temporary files, checkpoints   No
/tmp       none      2 days     per-node                     node-local scratch             No
/freezer   2 TB      2 years    unified (login nodes only)   long term data archive         No
  • The quota column indicates if the file system has a per-user limit to the amount of data they can store.
  • The expiry column indicates if the file system automatically deletes old files and the timescale for deletion.
  • The access column indicates the scope, or availability of the file system. "unified" means that when you login, regardless of cluster, you will always see the same directory.

* May be less and not unified on some of our clusters (eg. requin and some of the specialty systems), type "quota" when you log into a cluster for up to date information.
** There is also a quota on the maximum number of files a user can have on any file system. Currently the limit is 1,000,000.

For more detailed information please go to the Using Storage article.

Where is my /work folder?

/work is an automounted filesystem. When you first login to a system, your directory may not appear under /work. As soon as you access it (cd to it, or ls it or its contents), the system will mount your directory and it will appear under /work. If you are connecting with a GUI client you need to go to the full path of your work directory, /work/YOUR_USER_NAME.
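For example, from the command line either of the following will trigger the automount of your own work directory:

 ls -ld /work/$USER     # listing the full path mounts it
 cd /work/$USER         # changing into it also works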

Best storage to use for jobs

Since /home is remote on most clusters and is used frequently by all users, it's important that it not be used significantly for jobs (eg. reading in a small configuration file from /home is ok - writing repeatedly to many different files in /home during the course of your jobs is not).

One can do significant I/O to /work from jobs, but it is also remote to most clusters. For this reason, to obtain the best file system throughput you should use the /scratch file system. In some cases jobs may be able to make use of /tmp for local caching, but it is not recommended as a storage target for regular output.

For users who want to learn more about optimizing I/O at SHARCNET please read Analyzing I/O Performance.

Cluster-local Scratch storage

/scratch has no quota limit - so you can put as much data in /scratch/<userid> as you want, until there is no more space. The important thing to note, though, is that all files on /scratch that are over 62 days old will be automatically deleted (please see this knowledge base entry for details on how /scratch is purged of old files).

Backups

Backups are in place for your home directory ONLY. Scratch and global work are not backed up. In general we store one version of each file for the previous 5 working days, one for each of the 4 previous weeks, and one version per month before that. Backups began in September 2006.

Node-Local Storage

/tmp may be unavailable for use on clusters where there are no local disks on the compute nodes. Users should try to use /scratch instead, or email help@sharcnet.ca to discuss using node-local storage.

Archival Storage

To back up large volumes of data that don't need to stay available on global work or local scratch, use the /freezer filesystem.

Please note: unlike our old /archive file system, the new /freezer file system has both a size quota (2TB; going over the quota results in your submitted jobs not running - same as with /work), and an expiry: after 2 years your files will be deleted. See our storage policies page for details.

How do I organize my files? [Graham]

[Figure: Graham filesystem layout (Filesystemgraham.png)]

How are file permissions handled at SHARCNET?

By default, anyone in your group can read and access your files. You can provide access to any other users by following this Knowledge Base entry.

All SHARCNET users are associated with a primary GID (group ID) belonging to the PI of their group (you can see this by running id username, substituting your own username). This allows groups to share files without any further action, as the default file permissions for all SHARCNET storage locations (e.g. /gwork/user) allow read (list) and execute (enter/access) permissions for the group; for example, they appear as:

  [cc_user@gra-login2 ~]$ ls -ld scratch/
  drwxrwx---+ 12 cc_user cc_user 4096 Jul 18 08:59 scratch/


Further, by default the umask value for all users is 0002, so any new files or directories will continue to provide access to the group.

Should you wish to keep your files private from all other users, you should set the permissions on the base directory to only be accessible to yourself. For example, if you don't want anyone to see files in your home directory, you'd run:

chmod 700 ~/

If you want to ensure that any new files or directories are created with different permissions, you can set your umask value. See the man page for further details by running:

man umask
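For example, adding the following line to your shell startup file (e.g. ~/.bashrc) would make new files and directories private to you by default:

 umask 0077    # new files: readable/writable by you only; new directories: accessible by you only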

For further information on UNIX-based file permissions please run:

man chmod

What about really large files or if I get the error 'No space left on device' in ~/project or ~/scratch?

If you need to work with really large files we have tips on optimizing performance with our parallel filesystems here.

How do I transfer files/directories to/from or between clusters?

Unix/Linux

To transfer files to and from a cluster on a UNIX machine, you may use scp or sftp. For example, if you want to upload file foo.f to cluster orca from your machine myhost, use the following command

myhost$ scp foo.f orca.sharcnet.ca:

assuming that your machine has scp installed. If you want to transfer a file from Windows or Mac, you need to have an scp or sftp client for Windows or Mac installed.

If you want to transfer the file foo.f between SHARCNET clusters, say from your home directory on orca to your home directory on graham, simply use the following command

[username@orc-login2:~]$ scp foo.f graham:/home/username/

If you are transferring directories between a UNIX machine and a cluster, you may use the scp command with the -r option. For instance, if you want to download the subdirectory foo in the directory project in your home directory on saw to your local UNIX machine, use the following command on your local machine:

myhost$ scp -rp saw.sharcnet.ca:project/foo .

Similarly, you can transfer a subdirectory between SHARCNET clusters. The following command

[username@orc-login2:~]$ scp -rp graham:/scratch/username/foo .

will download subdirectory foo from your scratch directory on graham to your home directory on orca (note that the prompt indicates you are currently logged on to orca).

The -p option above preserves the time stamp of each file. For Windows and Mac, you need to check the documentation of your scp client for supported features.

You may also tar and compress the entire directory and then use scp to save bandwidth. In the above example, first log in to orca, then do the following

[username@orc-login2:~]$ cd project
[username@orc-login2:~/project]$ tar -cvf foo.tar foo
[username@orc-login2:~/project]$ gzip foo.tar

Then on your local machine myhost, use scp to copy the tar file

myhost$ scp orca.sharcnet.ca:project/foo.tar.gz .

Note that on most Linux distributions tar has a -z option that will compress the .tar file using gzip in a single step.
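For example, the tar and gzip steps above can be combined into one command:

 tar -czvf foo.tar.gz foo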

Windows

You may read the instructions for using an SSH client in our SSH tutorial.

How can I best transfer large quantities of data to/from SHARCNET and what transfer rate should I expect?

In general, most users should be fine using scp or rsync to transfer data to and from SHARCNET systems. If you need to transfer a lot of files rsync is recommended to ensure that you do not need to restart the transfer from scratch should there be a connection failure. Although you can use scp and rsync to any cluster's login node(s), it is often best to use dtn.sharcnet.ca - it is dedicated to data transfer.
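As a sketch (the local directory and remote path below are placeholders), a transfer that can safely be re-run after an interruption might look like:

 # -a preserves permissions and timestamps, -v is verbose, and
 # -P (--partial --progress) keeps partially transferred files so a
 # re-run resumes the transfer instead of starting from scratch
 rsync -avP mydata/ username@dtn.sharcnet.ca:/scratch/username/mydata/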

In general one should expect the following transfer rates with scp:

  • If you are connecting to SHARCNET through a Research/Education network site (ORION, CANARIE, Internet2) and are on a fast local network (this is the case for most users connecting from academic institutions) then you should be able to attain sustained transfer speeds in excess of 10MB/s. If your path is all gigabit or better, you should be able to reach rates above 50 MB/s.
  • If you are transferring data over the wider internet, you will not be able to attain these speeds, as all traffic that does not enter/exit SHARCNET via the R&E net is restricted to a limited-bandwidth commercial feed. In this case one will typically see rates on the order of 1MB/s or less.

Keep in mind that filesystems and networks are shared resources and suffer from contention; if they are busy, the above rates may not be attainable.

If you need to transfer a large quantity of data to SHARCNET and are finding your transfer rate to be slow please contact help@sharcnet.ca to request assistance. We can provide additional tips and tools to greatly improve data transfer rates, especially to systems/users outside of Ontario's regional ORION network. For example, we've observed speed-ups from <1 MB/s using scp to well over 10 MB/s between Compute Canada systems connected via CANARIE by using specialized data-transfer programs (eg. bbcp).

How do I access the same file from different subdirectories on the same cluster ?

You should not need to copy large files on the same cluster (e.g. from one user to another, or to use the same file in different subdirectories). Instead of using scp, you might consider creating a "soft link" (symbolic link). Assume that you need access to the file large_file1 in the directory /work/user1/subdir1 and you want it to appear in your directory /work/my_account/my_dir under the name my_large_file1. Then go to that directory and type:

ln -s /work/user1/subdir1/large_file1    my_large_file1

Another example: assume that in the directory /work/my_account/PROJ1 you have several subdirectories called CASE1, CASE2, ... In each CASEn subdirectory you have a slightly different code, but all of them process the same data file called test_data. Rather than copying the test_data file into each CASEn subdirectory, place test_data one level above, i.e. in /work/my_account/PROJ1, and then in each CASEn subdirectory issue the following "soft link" command:

ln -s ../test_data  test_data

The "soft links" can be removed by using the rm command. For example, to remove the soft link from /work/my_account/PROJ1/CASE2 type following command from this subdirectory:

rm -rf test_data

Typing the above command from the directory /work/my_account/PROJ1 would remove the actual file, and then none of the CASEn subdirectories would have access to it.

How are files deleted from the /scratch filesystems?

All files on /scratch that are over 2 months old (not old in the common sense, please see below) are automatically deleted. You will be sent an email notification beforehand warning you of any filesystems (not the actual files, however) where you may have files scheduled for deletion in the immediate future.

An unconventional aspect of this system is that it does not determine the age of a file based on the file's attributes, e.g., the dates reported by the stat, find, ls, etc. commands. The age of a file is determined based on whether or not its data contents (i.e., the information stored in the file) have changed, and this age is stored externally to the file. Once a file is created in /scratch/<userid> , reading it, renaming, changing the file's timestamps with the touch command, or copying it into another file are all irrelevant in terms of changing its age with respect to the purging system. The file will be expired 2 months after it was created. Only files where the contents have changed will have their age counter "reset".

Unfortunately, there currently exists no method to obtain a listing of the files that are scheduled for deletion. This is something that is being addressed, however there is no estimated time for implementation.

If you have data in /scratch that needs to persist (eg. configuration files, important simulation output), we recommend you stage it to /gwork or /freezer as appropriate.

How to archive my data?

Presently SHARCNET provides the /freezer filesystem as a regularly accessible filesystem on the login nodes of our clusters (not the compute nodes!). To back up data which you'd like to keep, but don't expect to access in the foreseeable future, or to just keep a backup of data from the global work or local scratch filesystems, one may use regular commands (cp, mv, rm, rsync, tar etc.), eg.

 cp /scratch/$USER/$SIMULATION /freezer/$USER/$SIMULATION

Be extremely careful when deleting your data from the Archive: there is no backup for the data!

Please note: unlike our old /archive file system, the new /freezer file system has both a size quota (2TB; going over the quota results in your submitted jobs not running - same as with /work), and an expiry: after 2 years your files will be deleted. See our storage policies page for details.

How can I check the hidden files in directory?

The "." at the beginning of the name means that the file is "hidden". You have to use the -a option with ls to see it. I.e. 'ls -a'.

If you want to display only the hidden files then type:

ls -d .*

Note: there is an alias which is loaded from /etc/bashrc (see your .bashrc file). The alias is defined by alias l.='ls -d .* --color=tty' and if you type:

l.

you will also display only the hidden files.

How can I count the number of files in a directory?

One can use the following command to count the number of files in a directory (in this example, your /work directory):

find /work/$USER -type f   | wc -l

It is always a good idea to archive and/or compress files that are no longer needed on the filesystem (see below). This helps minimize your footprint on the filesystem and thus the impact on other users of the shared resource.

How to organize a large number of files?

With parallel cluster filesystems, you will get the best I/O performance by writing data to a small number of large files. Since all metadata operations on each of our parallel filesystems are handled by a single file server, the server can become overwhelmed when a large number of files are being accessed, leading to poor overall I/O performance for all users. If your workflow involves storing data in a large number of files, it is best to pack these files into a small number of larger archives, e.g. using the tar command

tar cvf archiveFile.tar directoryToArchive

For better performance with many files inside your archive, we recommend using DAR (Disk ARchive utility), which is a disk analog of tar (Tape ARchive). Dar can extract files from anywhere in the archive much faster than tar. The dar command is available by default on SHARCNET systems. It can be used to pack files into a dar archive by doing something like:

dar -s 1G -w -c archiveFile -g directoryToArchive

In this example we split the archive into 1GB chunks, and the archive files will be named archiveFile.1.dar, archiveFile.2.dar, and so on. To list the contents of the archive, you can type:

dar -l archiveFile

To temporarily extract files into the current directory for post-processing, you would type:

dar -R . -O -x archiveFile -v -g pathToYourFile/fileToExtract

I am unable to connect to one of the clusters; when I try, I am told the connection was closed by the remote host

The most likely cause of this behaviour is repeated failed login attempts. Part of our security policies involves blocking the IP address of machines that attempt multiple logins with incorrect passwords over a short period of time---many brute-force attacks on systems do exactly this: looking for poor passwords, badly configured accounts, etc. Unfortunately, it isn't uncommon for a user to forget their password and make repeated login attempts with incorrect passwords and end up with that machine blacklisted and unable to connect at all.

A temporary solution is simply to attempt to log in from another machine. If you have access to another machine at your site, you can shell to that machine first, and then shell to the SHARCNET system (as that machine's IP shouldn't be blacklisted). In order to have your machine unblocked, you will have to file a problem ticket, as a system administrator must manually intervene to fix it.
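If your SSH client is reasonably recent (OpenSSH 7.3 or newer), you can do this hop in a single command with the -J (jump host) option; the gateway host name below is a placeholder:

 ssh -J myuser@gateway.myuniversity.ca username@saw.sharcnet.ca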

NOTE: there are other situations that can produce this message, however they are rarer and more transient. If you are unable to log in from one machine, but can from another, it is most likely the IP blacklisting that is the problem and the above will provide a temporary work-around while your problem ticket is processed.

I can login successfully using WinSCP but I can't find my /work directory and files

Windows tools like WinSCP want to list the /work directory so users can click to traverse it. This doesn't work because of automounting (and also makes little sense given that there would be more than 4000 entries). To a command-line user this is a non-issue, since anything which accesses /work/$USER will instantiate the mount (ls -ld /work/$USER/, for instance, or cd).

A recommended workaround is to create a link from /work/$USER to /home/$USER/work, so that WinSCP can reach your work directory from your home directory, and set that as the default directory in your WinSCP settings.

I am unable to ssh/scp from SHARCNET to my local computer

Most campus networks are behind some sort of firewall. If you can ssh out to SHARCNET, but cannot establish a connection in the other direction, then you are probably behind a firewall and should speak with your local system administrator or campus IT department to determine if there are any exceptions or workarounds in place.
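A common workaround is to initiate the copy from your local machine instead, since outbound connections from your campus are usually allowed. For example (the remote path is a placeholder), run the following on your local machine rather than on the cluster:

 scp username@saw.sharcnet.ca:/scratch/username/results.tar.gz .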

SSH tells me SOMEONE IS DOING SOMETHING NASTY!?

Suppose you attempt to login to SHARCNET, but instead get an alarming message like this:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that the RSA host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
fe:65:ab:89:9a:23:34:5a:50:1e:05:d6:bf:ec:da:67.
Please contact your system administrator.
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
Offending key in /home/user/.ssh/known_hosts:42
RSA host key for requin has changed and you have requested strict checking.
Host key verification failed. 

SSH begins a connection by verifying that the host you're connecting to is authentic. It does this by caching the host's "hostkey" in your ~/.ssh/known_hosts file. At times, a hostkey may be changed legitimately; when this happens, you may see such a message. It's a good idea to verify this with us; you may be able to check the fingerprint yourself by logging in to another SHARCNET system and running:

ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub 

If the fingerprint is OK, the normal way to fix the problem is to simply remove the old hostkey from your known_hosts file. You can use your choice of editor if you're comfortable doing so (it's a plain text file, but has long lines). On a UNIX-compatible machine, you can also use the following very small script (substitute the line number(s) printed in the warning message illustrated above for '42' here):

perl -pi -e 'undef $_ if (++$line == 42)' ~/.ssh/known_hosts
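Alternatively, recent OpenSSH clients provide ssh-keygen -R, which removes all keys for a given host from your known_hosts file; use the host name as you typed it when connecting, for example:

 ssh-keygen -R requin.sharcnet.ca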

Another solution is brute-force: remove the whole known_hosts file. This throws away any authentication checking, and your first subsequent connection to any machine will prompt you to accept a newly discovered host key. If you find this prompt annoying and you aren't concerned about security, you can avoid it by adding a text file named ~/.ssh/config on your machine with the following content:

StrictHostKeyChecking no

Ssh works, but scp doesn't!

If you can ssh to a cluster successfully, but cannot scp to to it, the problem is likely that your login scripts print unexpected messages which confuse scp. scp is based on the same ssh protocol, but assumes that the connection is "clean": that is, that it does not produce any un-asked-for content. If you have something like:

echo "Hello, Master; I await your command..."

scp will be confused by the salutation. To avoid this, simply ensure that the message is only printed on an interactive login:

if [ -t 0 ]; then
    echo "Hello, Master; I await your command..."
fi

or in csh/tcsh syntax:

if ( -t 0 ) then
    echo "Hello, Master; I await your command..."
endif

How do I edit my program on a cluster?

We provide a variety of editors, such as the traditional text-mode emacs and vi (vim), as well as a simpler one called nano. If you have X on your desktop (and tunneled through SSH), you can use the GUI versions (xemacs, gvim).

If your desktop supports FUSE, it's very convenient to simply mount your home tree like this:

mkdir sharcnet
sshfs orca.sharcnet.ca: sharcnet

You can then use any local editor of your choice.
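When you are finished, unmount the tree; on most Linux desktops this is done with fusermount (on macOS, "umount sharcnet" works instead):

 fusermount -u sharcnet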

If you run emacs on your desktop, you can also edit a remote file from within your local emacs client using Tramp, opening and saving a file as /username@cluster.sharcnet.ca:path/file.