Note: Some of the information on this page is for our legacy systems only. The page is scheduled for an update to make it applicable to Graham.



SHARCNET has a number of visualization workstations installed at particular partner institutions. Please see the Visualization Systems list in the web portal for further information.

Disk Space

The disk space on the visualization workstations is divided into three areas, much like on the SHARCNET clusters: the user's home directory, work directory, and scratch directory. Currently, these directories are laid out as follows:

  • /home/USER: This is the user's home directory, as used across all of the SHARCNET clusters. Its space limit is currently 10GB; once it is full, you will not be able to create more files in it, so use your home directory to hold configuration files and environment settings. This directory is remotely mounted from the login node of the Bull cluster. If that system is not available, a temporary "local" home directory will be created - do not store important files there, as you will lose access to them once your real home directory becomes available!
  • /work/USER: This is the user's /work/ directory, currently mounted from the SHARCNET Global Work storage. The standard quota on /work/ is 1000GB. Files that need to follow you around, such as input data or program files, should go here: files placed in /work/ on one visualization workstation will appear in your /work/ directory on all visualization workstations. Again, as this directory is remotely mounted from Global Work, it may become temporarily unavailable if that system is down.
  • /scratch/USER: This is local, high speed, high(er) capacity storage space, separate for each visualization workstation. It is not remotely mounted, which means access to it is somewhat faster than to the other directories, but if you move from one workstation to another, the contents of your /scratch/ directory will not follow you. There is no quota on the /scratch/ directories on the visualization workstations; you can use whatever space is available on the machine. However, bear in mind that, as on the clusters, very old files may be expired, and more than one person uses these machines - leaving some space for others is polite, so remember to clean up unneeded files.
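Before starting a large job, it can be useful to see how much of each area you are using. A minimal sketch, assuming the directory layout described above (the /scratch path may differ on a given workstation):

```shell
# Usage of your home directory, to compare against the 10GB quota
du -sh "$HOME"

# Free space on the local scratch filesystem; the path is an assumption
# based on the layout described above
df -h /scratch 2>/dev/null || echo "/scratch not mounted on this machine"
```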

Remote storage

  • scratch on other SHARCNET clusters can be mounted on the workstations via sshfs. For example, to mount your /scratch directory from cluster saw, you could execute (assuming the cluster's login node is reachable as saw.sharcnet.ca):
mkdir /scratch/$USER/saw
sshfs $USER@saw.sharcnet.ca:/scratch/$USER /scratch/$USER/saw

After providing the password, the directory will be mounted in /scratch/$USER/saw and can be used directly. This is very useful if you need to do interactive visualization and your large datafiles are in /scratch on one of the clusters.
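When you are finished with the remote files, it is polite to unmount the directory so that stale mounts do not linger. A minimal sketch, using the same mount point as the example above:

```shell
MNT="/scratch/$USER/saw"
# Detach the sshfs mount only if it is actually present; fusermount
# ships with FUSE alongside sshfs
if mountpoint -q "$MNT" 2>/dev/null; then
    fusermount -u "$MNT"   # unmount the remote scratch
    rmdir "$MNT"           # remove the now-empty mount point
else
    echo "$MNT is not currently mounted"
fi
```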

Remote Access

The visualization workstations can be accessed via VNC. Please see the Remote Graphical Connections to SHARCNET page for more information.


Problems with /home

The visualization workstations mount your home directory from a secondary system. If the visualization workstation is not able to contact the secondary system, it creates a new "local" home directory on the local machine, which still allows you to access the machine despite your real home directory not being available.

If you create files in the temporary home directory, and the secondary system subsequently becomes available, your real home directory will be mounted over top of the temporary home, rendering those files inaccessible. Please email SHARCNET support to recover these inaccessible files.

The best way to correct this problem on a Visualization workstation is to log out of the machine completely, and then log back in.
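To tell whether you are currently in your real (remotely mounted) home or a temporary local one, you can inspect the filesystem backing it. A minimal check; the interpretation below is an assumption based on the behaviour described above:

```shell
# Show the filesystem backing your home directory: a remote source such
# as host:/export/home indicates your real home is mounted, while a
# local device suggests you are in a temporary local home
df -h "$HOME" | tail -1
```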

Checking system load

The visualization workstations have no queues, and any user can run their programs from the command line. Unfortunately, this means that two users could potentially be competing for computational resources on the same machine. To check if the machine you logged into is busy, run standard Linux commands like w or top. If you find the machine is busy, it may be a good idea to switch to one that is not.
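The check above can be sketched with standard Linux tools:

```shell
w        # who is logged in and what they are running
uptime   # 1-, 5- and 15-minute load averages
nproc    # number of CPU cores; a load average near or above this
         # number means the machine is busy
```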

The same consideration applies to the GPU cards installed in a visualization workstation: they are also shared between users. If you are trying to benchmark code, it is important to check that another user is not running something on the GPU.

On systems with NVIDIA cards (most of them) you can check the load on the GPU with the command:

nvidia-smi

Look at the GPU-Util entry, which shows the % load on the GPU. If that is listed as N/A (not available), you can get some idea of the load by looking at the card's memory usage and temperature.
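If you want a scriptable one-line summary rather than the full status table, recent NVIDIA drivers support a query mode. A sketch, guarded in case the NVIDIA tools are absent on a given machine:

```shell
# Utilization, memory use and temperature in CSV form; falls back to a
# message on machines without the NVIDIA tools installed
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=utilization.gpu,memory.used,temperature.gpu --format=csv
else
    echo "nvidia-smi not found on this machine"
fi
```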

On machines with ATI cards installed, you can check the GPU load as follows:

export DISPLAY=:0                         # point X clients at the local display
xhost +                                   # allow access to the X server
aticonfig --adapter=all --od-getclocks    # report clocks and load for all adapters