
Storage on Graham and Cedar

For information about storage on the new national systems, please go to Storage_and_file_management page on the Compute Canada wiki.

What storage is available

The Sharcnet clusters have a variety of storage systems available to users. Which one you place your files into depends on your specific needs. In all cases, user directories are stored as (filesystem)/(userid), so the user "sharcnet" would find their home directory at /home/sharcnet, their work directory at /work/sharcnet, and their scratch directory at /scratch/sharcnet. The only exception is on the kraken login nodes, where sub-cluster-specific scratch directories are stored as /scratch/(subcluster)/(userid); for example, the sharcnet user's scratch directory for the whale nodes appears as /scratch/wha/sharcnet on the kraken login nodes. The kraken compute nodes follow the standard /scratch/(userid) pattern.
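For example, assuming your account follows this layout, you can confirm the locations of your own directories from a login node on most clusters like this (the hostname shown is only an illustration):

[sharcnet@orc-login1:~] ls -d /home/$USER /work/$USER /scratch/$USER
/home/sharcnet  /scratch/sharcnet  /work/sharcnet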

Below is a list of the filesystems that are available on the Sharcnet clusters:

  1. /home
    • Space: 10 GB
    • Purpose: Storage of configuration files, and small source code trees
    • Available on: all login and compute nodes, and Visualization machines
    • Quota type: Hard limit - once exceeded, no more files can be written
  2. /scratch
    • Space: Variable depending on cluster
    • Purpose: Temporary storage for fast access to files for data processing. /scratch storage should absolutely not be used as a long term storage location.
    • Available on: all login and compute nodes of an individual cluster have access to that cluster's scratch filesystem. Kraken nodes use the scratch for their local sub-cluster, and all kraken sub-cluster scratches are available on the login nodes. Visualization machines have individual local scratch filesystems.
    • Quota type: Timed expiry - files unchanged for 62 days are automatically removed.
  3. /work
    • Space: Global work (on most clusters) has a 1 TB quota; local work (on the mako and requin clusters) has a 200 GB quota
    • Purpose: Long term storage of source code, program code, and data that is being actively used
    • Available on: all login and compute nodes - the mako cluster has access only to its own /work directories, requin uses local /oldwork and mounts global work as /gwork, and all other clusters and Visualization machines mount global work as /work
    • Quota type: Soft limit - once exceeded, limits on cluster resources are enforced until usage is below limits again
  4. /freezer
    • Space: 2 TB
    • Purpose: Long term storage of data that is not currently being used, but may be needed later
    • Available on: All cluster login nodes
    • Quota type: 2 years expiry
  5. /tmp
    • Space: Small, varies by cluster and node
    • Purpose: Very short term, local data storage during calculations. /tmp files cannot be relied on to remain past the end of a job's run.
    • Available on: node-local storage; each node has an independent /tmp drive which is not accessible from other nodes or from the login nodes.
    • Quota type: Periodic purging of /tmp drive between running jobs.

Quota and how it works

Space usage on the home and work filesystems is monitored through a quota system. To see your current usage according to this system, you can use the quota command when logged into a cluster or visualization machine, like this:

[sharcnet@req769:~] quota
Filesystem           Limit       Used            File Count   Checked
jones:/home          10 GB      *11.3 GB (112%)        1,986    12h ago
lundun:/work         1 TB        20.9 MB (0%)        323,313   10h ago

The meanings of the sections of the output are as follows:

Filesystem: Indicates the cluster and directory on which the data in question is stored. The special clusters 'lundun' and 'gulf' represent the global work directories, which are accessible across all of Sharcnet's clusters and visualization machines.

Limit: Indicates the maximum amount of storage space you are allowed to access on the filesystem in question.

Used: Indicates the amount of space currently occupied by your files on the indicated filesystem. Any entry which is over the limit will be marked with a * - in the displayed example above, the sharcnet user is over their /home quota limit.

File Count: Indicates the total number of files contained in your directory on the filesystem in question.

Checked: Indicates how long ago the most recent complete usage check was finished on the indicated filesystem. Quota scans are typically run every 24 hours, starting just after midnight, and depending on which filesystem and cluster can take anywhere from 5 minutes to several hours to complete.

Additionally, if your account has had resource limitations applied to it due to being over quota on a /work filesystem for too long, a warning about this will be displayed before the regular output of the quota command.

As monitoring your quota is a good idea, you may want to add a quota line to your .bashrc file so that your current usage is displayed, and any overages become apparent, every time you log into a cluster.
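For example, the command can be appended to the end of your .bashrc like this (the prompt shown is only an illustration):

[sharcnet@req769:~] echo "quota" >> ~/.bashrc

On your next login, the quota report will be printed automatically before your shell prompt appears.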

Last, if you are the owner of a Dedicated Resources project, your DR project directory will also be listed in the output from the quota command, and will be labelled as such, like this:

[sharcnet@req769:~] quota
Filesystem               Limit       Used            File Count   Checked
jones:/home              10 GB      *11.3 GB (112%)       1,986    12h ago
lundun:/work             1 TB        20.9 MB (0%)       323,313   12h ago
gulf:/work/nrap12345     15 TB       12.2 TB (81%)    1,343,636   13h ago


Home Quota

Effects of Overage

Typical /home quota is 10 GB per user and is enforced as a hard limit. This means that when your usage exceeds the allowed space, you will not be able to write additional files to your home directory, and will receive write errors if you attempt to do so. No other restrictions are placed on accounts which have exceeded their quota on the /home filesystem. However, as the default job submission places log entries and some output from your job into your home directory, this data from your jobs may become unavailable if you are over your home quota when the job completes.

Fixing Overage

To correct overages in your home directory, it is necessary to log into a Sharcnet cluster and remove files from your home directory, either by deleting them or by moving them to the /work or /freezer filesystems. Once your usage is below the limits, you will be able to make use of the directory again immediately, regardless of the output from the quota command, which will only update once per day.
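For example, a directory of old results could be moved out of /home and into /work like this (the directory name here is only an illustration):

[sharcnet@bul129:~] mv ~/old_results /work/sharcnet/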

To verify immediately how much space your directory is using, you can use this command:

[sharcnet@bul129:~] du -sh /home/sharcnet
124M     /home/sharcnet

One thing to note - the du command can sometimes take some time to run on the filesystem, so you may need to be patient with it.

Work Quota

Effects of Overage

Global /work directories have a quota of 1 TB, and cluster-local /work filesystems have quotas of 200 GB, enforced with a soft limit: you can still write files to your directory while over your quota, but other resource limitations are placed on your account when you are found to be over quota.

Temporarily exceeding your /work quota at the end of a job and removing the excess files immediately will have no effect on your account. Quota scans begin just after midnight, and as long as your usage is back below your quota by then, the system will not even notice that there was an issue.

If your /work usage is still over quota when the nightly filesystem scan runs, your account will be flagged as over quota and placed into a 3-day grace period before resource limits are applied. This will cause your usage of that filesystem in the 'quota' command's output to be marked with a *, but otherwise places no limits on your account or your ability to submit and run jobs, allowing you time to clean up your excess usage without penalty.

If your /work usage is over quota for more than three days, your account will be flagged as over quota, and resource limitations will be applied which prevent you from submitting new jobs to the clusters, and will prevent any already queued jobs from running. You can verify whether your account has had resource limits placed on it by using the groups command while logged into a cluster, and checking if the group 'ovrquota' is in the resulting list, like this:

[sharcnet@gb2:~] groups
sharcnet certlvl1 guelph_users fairshare-1 ovrquota

If you have jobs currently running or queued when your account is noted as being over quota, a warning email will be sent to your Sharcnet account informing you of the overage. Additionally, if your account is over quota at the end of the week (Sunday night), a warning email will be sent to your Sharcnet account even if no jobs are currently running.

Fixing Overage

If your account has had resource limitations placed on it due to being over quota for more than three days, there are two steps to correcting the problem:

First, you need to remove sufficient files from your directory to get your actual usage below your quota - this can be done by just deleting the files, copying them to your own local storage and deleting them, or by moving files to /freezer for long term storage. You can verify that your usage is below the limit by using the du command, like this:

[sharcnet@saw-login1:~] du -sh /work/sharcnet
843G     /work/sharcnet

As with the /home filesystem, the du command can sometimes take a considerable amount of time to run, especially if you have a large number of files and directories.

The second step in getting the over quota resource limitations removed from your account is to wait for the next quota scan to complete, at which time your overquota status will be cleared, and the 'ovrquota' group will be removed from your account.

If you have left any login sessions running from when you were over quota, you may need to log out of those sessions and log back in in order to completely clear the ovrquota status. If this is the case, the 'quota' command will report you as not being over quota, but the 'groups' command will still show you belonging to the 'ovrquota' group.

Scratch Expiration

The scratch filesystems are intended for short term storage of data files to provide local filesystem access for each cluster, which allows faster file access than the global /work filesystem. Because there is no space limitation, the scratch filesystems work on an age-based expiration system, where files that have not been altered for more than the expiration period are removed as "stale". The current expiration time for scratch filesystems is 62 days.

Because of the expiration, it is important to remember that you should not use /scratch for long term storage, or storage of important output files - any files produced by jobs that you need to keep should be moved out of /scratch into your /work or /freezer directory immediately after your job has completed.
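To see which of your files are approaching expiry, you can look for files that have not been modified recently with the find command; for example, this (illustrative) command lists files in your scratch directory that have not been changed in more than 50 days:

[sharcnet@orc-login1:~] find /scratch/$USER -type f -mtime +50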

Additionally, due to the lack of space usage limitations on the /scratch filesystem, it is possible for a few users to occupy the entire scratch filesystem, causing all of the users on that cluster to be unable to write to the drive. To check that space is available, you can use this command:

[sharcnet@orc-login1:~] df -h /scratch/
Filesystem            Size  Used Avail Use% Mounted on
10.27.9.132@o2ib:10.27.9.133@o2ib:/orcalfs
                      54T   47T  6.9T  88% /orc_lfs

In this case, the output indicates that the scratch filesystem on the Orca cluster has a total of 54TB, 47TB of which is currently in use, leaving only 6.9TB available for other users, with the filesystem being 88% full. A good practice would involve removing your old data files after your job has completed, to ensure that the filesystem does not become excessively filled with unwanted data.
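For example, once a completed job's output has been copied to /work or /freezer, the leftover scratch directory can be removed like this (the directory name is only an illustration):

[sharcnet@orc-login1:~] rm -rf /scratch/sharcnet/finished_run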

Archive Storage

Note: recently the /archive filesystem has been discontinued and replaced by /freezer

Long term archival storage of files on Sharcnet clusters is provided through the /freezer filesystem. Please note: unlike our old /archive file system, the new /freezer file system has both a size quota (2TB; going over the quota results in your submitted jobs not running - same as with /work), and an expiry: after 2 years your files will be deleted. See our storage policies page for details. The /freezer filesystem is accessible only from the login nodes of the various clusters, and to use it, you simply move the files you wish to archive into your /freezer directory. As an example, the "sharcnet" user, wishing to move an entire directory called My_Results from their /work directory to /freezer, would do so by logging into the cluster, and using these commands:

[sharcnet@gup-hn:~] cd /work/sharcnet
[sharcnet@gup-hn:/work/sharcnet] mv My_Results /freezer/sharcnet/

Please note that if you are moving files into /freezer, you should either use the mv command to move the files, or delete the old files after copying them.

For moving substantial amounts of data, users should use the machine dtn.sharcnet.ca, which is a dedicated data transfer node.
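For example, after logging into the data transfer node you could copy a large directory into /freezer with rsync before removing the original; the directory name here is only an illustration, and the exact paths visible on dtn.sharcnet.ca may differ:

[sharcnet@dtn:~] rsync -av /work/sharcnet/large_dataset /freezer/sharcnet/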

Additionally, for source code storage and backups, SHARCNET has set up a Git repository, which has usage instructions here: https://www.sharcnet.ca/help/index.php/Git

Requesting an Exemption

Users who have need of a larger amount of storage space can submit a request for additional space to help@sharcnet.ca for an extension. The request should include:

  • Which filesystem the quota extension is requested to be on (global work, requin work, mako work, home directory, or scratch timeout period)
  • What the space will be used for
  • Why the space is needed, rather than simply placing most of the data in /freezer

Advice for best practices

Watch your quota

The first, and most important, step in managing your disk usage on Sharcnet is to keep a close eye on how much space you are using with the quota command. The easiest way to do this is to add it to the end of the .bashrc file in your home directory, so that your quota status will be displayed every time you log into a Sharcnet cluster or visualization machine.


Clean up unused files

Any data which you are not currently using for your work should be moved to /freezer storage - this can save you from the headache of running out of space in your work directory, and also keeps the scratch filesystems free of un-needed data and available for everyone.

Any data you no longer need at all should be deleted.


A small number of large files is better than a large number of small files

Large numbers of files in a single directory will cause file access to be slow for that directory. To prevent this, first you should try to make use of sub-directories to divide your files into more manageable groups, as in the example below. Generally, you can place around 1000 files in a directory before access to the directory starts to slow down.
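For example, output files from separate runs could be split into per-run sub-directories like this (the file and directory names are only illustrations):

[sharcnet@mako2:/work/sharcnet] mkdir run01 run02
[sharcnet@mako2:/work/sharcnet] mv output-run01-* run01/
[sharcnet@mako2:/work/sharcnet] mv output-run02-* run02/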

For archiving, large collections of small files should be archived together with the tar command, like this:

[sharcnet@mako2:/work/sharcnet] tar -c -v --remove-files -f /freezer/sharcnet/worlds-20090324.tar world*

This would collect all of the files with names starting with "world" into a single archive file named "/freezer/sharcnet/worlds-20090324.tar", indicating that they were "world" files from March 24th, 2009. The "/freezer/sharcnet/" part means that the archive will be created in the /freezer filesystem, so it does not count against your /work disk quota.

The parameters of the tar command have the following meanings:

"-c" means "Create", this is used to create a new archive.

"-v" means "Verbose", this causes tar to print out a list of all of the files it is adding to the archive, as it adds them.

"--remove-files" causes tar to delete the old versions of the files after they are successfully added to the archive file.

"-f worlds-20090324.tar" causes tar to output the results to the file worlds-20090324.tar. Without this, tar will simply display the contents of the file to the screen, instead of actually creating the archive file.

Optionally, the "-z" option can be added to compress your files as they are archived - if you do this, you should add .gz to the end of your archive file name.

To view a list of the files contained in an archive, you can use this command:

[sharcnet@mako2:/work/sharcnet] tar -t -v -f /freezer/sharcnet/worlds-20090324.tar
 -rw-r--r-- sharcnet/sharcnet 4529 2009-03-22 09:44:21 world-novirt0
 -rw-r--r-- sharcnet/sharcnet 4515 2009-03-22 09:43:02 world-novirt1
 -rw-r--r-- sharcnet/sharcnet 29850 2009-03-22 09:28:18 world-yesvirt0
 ...

Using the "-t" parameter instead of "-c" tells tar to list the contents of the file, rather than creating a new file. If you compressed your archive with "-z", you will also need to include this in the command to list it's contents.


Lastly, to extract the files into your current directory, you would use:

[sharcnet@mako2:/scratch/sharcnet] tar -x -v -m -f /freezer/sharcnet/worlds-20090324.tar

In this case, the two changed parameters are as follows:

"-x" causes tar to extract the archived files into whatever directory you are currently in. In the above example, we are extracting them into the /scratch/sharcnet directory on the mako cluster.

"-m" is important to use when extracting archived files to /scratch, as it resets the creation time on those files to the current time, so that they will not accidentally be expired early while we are still using them.

Again, if your archive file was compressed with the "-z" option, you will need to include it in the command to extract the files.
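For example, if the archive above had been compressed with "-z", extracting it would look like this:

[sharcnet@mako2:/scratch/sharcnet] tar -x -v -z -m -f /freezer/sharcnet/worlds-20090324.tar.gz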