|This page is scheduled for deletion because it is either redundant with information available on the CC wiki, or the software is no longer supported.|
- 1 Introduction
- 2 Gaussian on Graham
- 3 Gaussian on Orca
- 4 General Notes
- 4.1 NBO6
- 4.2 Restart
- 4.3 %mem vs --mpp
- 4.4 Others
- 5 References
|Description: Chemistry, Electronic Structure Programs|
|SHARCNET Package information: see Gaussian software page in web portal|
|Full list of SHARCNET supported software|
Gaussian 03, 09 and 16 are available on several clusters; G16 is the latest in the Gaussian series of electronic structure programs. It can perform density functional theory (including time-dependent), Hartree-Fock (including time-dependent), Møller-Plesset, coupled-cluster, and configuration interaction calculations. Gaussian is widely used to predict the energies, molecular geometries, and vibrational frequencies of molecular systems, along with numerous molecular properties derived from these basic computation types.
Gaussian on Graham
Gaussian g16.b01, g16.a03, g09.e01 and g03.d01 are installed as modules on the newest cluster, Graham. NBO6 is available in the g16.b01 and g09.e01 versions.
Graham uses your Compute Canada (CC) user account, not your SHARCNET account. Access to Gaussian on Graham is managed through the 'soft_gaussian' group. Users must request access to Gaussian on Graham by email to
with a subject like: Graham, request access to Gaussian. The email body should include the license agreement terms:
To join the 'soft_gaussian' user group on Graham, I accept Gaussian license agreements: 1) I am not a member of a research group developing software competitive to Gaussian. 2) I will not copy the Gaussian software, nor make it available to anyone else. 3) I will properly acknowledge Gaussian Inc. and Compute Canada in publications. 4) I will notify Compute Canada of any change in the above acknowledgement. Followed by your CC userid.
User Environment Setup
Each research group has a default account named def-PI'sid, where PI'sid is your supervisor's userid (typing 'groups' on Graham shows the groups you belong to).
If this is your first time using Graham, you can add the following lines to your .bash_profile file:
export SLURM_ACCOUNT=def-supervisor
export SBATCH_ACCOUNT=$SLURM_ACCOUNT
export SALLOC_ACCOUNT=$SLURM_ACCOUNT
To use Gaussian 16:
module load gaussian/g16.b01
To use Gaussian 09:
module load gaussian/g09.e01
To use Gaussian 03:
module load gaussian/g03.d01
Graham uses the Slurm scheduler, which is different from the sq system used on other SHARCNET clusters.
Besides your input file name.com, you must prepare a job script in the same directory to define the compute resources for the job.
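For illustration, a minimal name.com could look like the following sketch (a hypothetical HF/6-31G(d) single-point job on water; the method, geometry, %mem and %nprocs values are placeholders and should be consistent with your job script):

```
%chk=name.chk
%mem=8GB
%nprocs=16
#p HF/6-31G(d)

water single point (illustrative example)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

```

Note the blank line at the end of the file, which Gaussian requires.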
There are two options (job scripts) for running your Gaussian job on Graham.
G16 (G09, G03)
G16 saves the default runtime files (unnamed .rwf, .inp, .d2e, .int, .skr files) to /scratch/username/jobid/. These files remain there if the job finishes unsuccessfully (e.g., it ran out of time or failed for any other reason), so with this option you can locate the .rwf file for restart purposes at any later time.
Example G16 job script, name.sh:
#!/bin/bash
#SBATCH --mem=16G             # memory, roughly 2 times the %mem defined in the input name.com file
#SBATCH --time=00-20:00       # expected run time (DD-HH:MM)
#SBATCH --cpus-per-task=16    # number of cpus for the job, as defined by %nprocs in name.com
#SBATCH --output=name.log     # output file
module load gaussian/g16.b01  # load the Gaussian version
G16 name.com                  # G16 command; input: name.com, output: name.log by default
To use Gaussian 09 or Gaussian 03, simply modify the module load line to gaussian/g09.e01 or gaussian/g03.d01 and change G16 to G09 or G03 in name.sh. You can modify the --mem, --time, and --cpus-per-task to match your job's requirements for compute resources.
g16 (g09, g03)
This option saves the default runtime files (unnamed .rwf, .inp, .d2e, .int, .skr files) temporarily in $SLURM_TMPDIR (/localscratch/username.jobid.0/) on the compute node where the job is scheduled. The scheduler removes these files when the job ends (successful or not). If you do not expect to use the .rwf file for a restart at a later time, you can use this option.
/localscratch is ~800G, shared by all jobs running on the same node. If your job's files would be close to or larger than that size, use the G16 (G09, G03) command described above instead.
Example g16 job script, e.g., name.sh:
#!/bin/bash
#SBATCH --mem=16G             # memory, roughly 2 times the %mem defined in the input name.com file
#SBATCH --time=00-20:00       # expected run time (DD-HH:MM)
#SBATCH --cpus-per-task=16    # number of cpus for the job, as defined by %nprocs in name.com
module load gaussian/g16.b01  # module load line
g16 < name.com >& name.log    # g16 command; input: name.com, output: name.log
To use Gaussian 09 or Gaussian 03, simply modify the module load line to gaussian/g09.e01 or gaussian/g03.d01 and change g16 to g09 or g03 in name.sh. You can modify the --mem, --time, and --cpus-per-task to match your job's requirements for compute resources.
Sample *.sh and *.com files can be found on Graham in
/home/jemmyhu/tests/test_Gaussian/g16 or /home/jemmyhu/tests/test_Gaussian/g09
Change name.sh file permission to be executable, i.e.,
chmod 750 name.sh
Submit the job using sbatch:
sbatch name.sh
For a different job, simply copy name.sh to a different filename, e.g., name1.sh, paired with name1.com and name1.log, in order to run the input file name1.com.
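The copy-and-rename step above can be sketched as follows (a minimal sketch; for demonstration it creates a stand-in name.sh in a temporary directory, whereas on Graham you would run the cp and sed lines against your real job script):

```shell
#!/bin/bash
# Demo setup: a stand-in for the G16 job script shown earlier.
cd "$(mktemp -d)"
printf '#!/bin/bash\n#SBATCH --output=name.log\nG16 name.com\n' > name.sh

old=name
new=name1
cp "${old}.sh" "${new}.sh"
# Point the copy at the new input/output file names.
sed -i "s/${old}\.com/${new}.com/g; s/${old}\.log/${new}.log/g" "${new}.sh"
cat "${new}.sh"
```

The resulting name1.sh can then be submitted with sbatch name1.sh.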
Check job status:
squeue -u userid    # check your own jobs in the queue
or
sacct               # your job history
You can run interactive Gaussian jobs for testing purposes on Graham. It is not good practice to run interactive Gaussian jobs on a login node; instead, start an interactive session on a compute node with salloc. An example for a one-hour, 8-cpu, 10G-memory Gaussian job looks like this:
Go to the input file directory first, then use the salloc command:
[jemmyhu@gra-login2 tests]$ salloc --time=1:0:0 --cpus-per-task=8 --mem=10g   # allocate a compute node with a jobid
salloc: Granted job allocation 93288
[jemmyhu@gra798 tests]$ module load gaussian/g16.b01
[jemmyhu@gra798 tests]$ G16 g16_test2.com                        # G16 saves runtime files (.rwf, etc.) to /scratch/yourid/93288/
or
[jemmyhu@gra798 tests]$ g16 < g16_test2.com >& g16_test2.log &   # g16 saves runtime files to /localscratch/yourid/
When it is done, or the test input looks OK, terminate the session:
[jemmyhu@gra798 tests]$ exit
exit
salloc: Relinquishing job allocation 93288
[jemmyhu@gra-login2 tests]$
Gaussian on Orca
Orca (orca.computecanada.ca) has been migrated to the same user environment as Graham.
Running g09 is the same as on Graham, but g16 is slightly different.
Example G16 job script, name_G16.sh:
#!/bin/bash
#SBATCH --account=youraccount   # PI's group account
#SBATCH --mem=8G                # memory amount, roughly 2 times %mem
#SBATCH --time=00-02:00         # time (DD-HH:MM)
#SBATCH --output=name_G16.log   # output file
#SBATCH --cpus-per-task=12      # number of cpus, as defined by %nprocs
module purge --force
export MODULEPATH=/opt/sharcnet/modules
module load gaussian/g16.b01
G16 name_G16.com                # G16 command
This uses the original g16 installation from the old Orca, since the g16 binary built for Graham does not run on Orca hardware.
Example input and .sh files to start a G16 job can be found in
The default NBO version in Gaussian is NBO 3.1. NBO was not updated in G09 because NBO became a separately licensed package. SHARCNET purchased an NBO6 site license and added NBO6 to g09_D.01 and g09_E.01 on SHARCNET systems.
In g09, the default 'pop=nbo' keyword invokes NBO 3; to use NBO 6, replace 'nbo' with 'nbo6', i.e., pop=nbo6.
Accordingly, use nbo6read, nbo6del, etc.
For detailed NBO6 usage and examples, please check NBO6
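As a sketch, an input that passes extra keywords to NBO6 via nbo6read might look like the following (the method, basis set, and the bndidx keyword are illustrative placeholders, not part of the examples shipped with g09):

```
#p B3LYP/6-31G(d) pop=nbo6read

NBO6 example

0 1
... molecule specification ...

$nbo bndidx $end

```

With pop=nbo6read, the $nbo ... $end section after the molecule specification supplies additional keywords to the NBO6 program.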
You can find some simple g09-NBO6 examples on the clusters under /opt/sharcnet/gaussian/g09_D.01/nbo6/g09tests/
Gaussian usage related to NBO can be found at Gaussian_population
Please note that the license does not include NBOview, a GUI application. If you are interested in NBOview, please contact the NBO6 software provider for a separate NBOview license.
A Gaussian job can always be restarted.
A geometry optimization job can be restarted from the .chk file as usual.
One-step computations, such as analytic frequency calculations (including properties like ROA and VCD with ONIOM), CCSD and EOM-CCSD calculations, NMR, Polar=OptRot, and CID, CISD, CCD, QCISD and BD energies, can be restarted from the .rwf file. For details, please check Gaussian Restart
To restart a job from a .chk or .rwf file, you need to know where the file from the previous run is. By default, the .chk file is in the directory where you submitted the job, but the .rwf file is in /scratch/yourid/jobid/. You have to move the *.rwf file from /scratch/yourid/jobid/ to /scratch/yourid/ to restart the previous job from that .rwf file. The following examples place the .rwf file in your /scratch (/scratch/yourid/*.rwf) for both the initial run and the restart.
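The move described above can be sketched as follows (a minimal sketch; for demonstration it uses a temporary directory in place of /scratch/$USER, and the jobid value is a placeholder for the Slurm job id of the previous run):

```shell
#!/bin/bash
# Demo setup: a stand-in for /scratch/$USER with a leftover runtime file.
scratch="$(mktemp -d)"                # on Graham this would be /scratch/$USER
jobid=12345678                        # placeholder: job id of the previous run
mkdir -p "$scratch/$jobid"
touch "$scratch/$jobid/example.rwf"   # pretend leftover .rwf from the failed job

# The actual restart-preparation step: move the .rwf up one level so that
# %rwf=/scratch/yourid/example.rwf in the restart input can find it.
mv "$scratch/$jobid"/*.rwf "$scratch/"
ls "$scratch"/example.rwf
```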
Sample input files to start and restart a 'freq' job are shown below. Please always leave a blank line at the end of the .com file (you can find real input files in /home/jemmyhu/tests/test_g09/Restart/):
Sample input file (example.com) for a freq job
%rwf=/scratch/yourid/example.rwf
%NoSave
%chk=example.chk
%mem=500mb
%nprocs=4
#p freq
... << my example.com >> input continues ...
Restart input file (example_restart.com) to restart the freq job
%rwf=/scratch/yourid/example.rwf
%NoSave
%chk=example.chk
%mem=500mb
%nprocs=4
#p restart
Sample input files to start and restart a 'Geom Opt' job are shown below.
Sample input file (example.com) for a Geom Opt job
%NoSave
%chk=example.chk
%mem=500mb
%nprocs=4
# B3LYP/DGDZVP Opt
... << my example.com >> input continues ...
Restart input file (example_restart.com) to restart the Geom Opt job
%NoSave
%chk=example.chk
%mem=500mb
%nprocs=4
# B3LYP/DGDZVP Opt Geom=AllCheck Guess=Read
%mem vs --mpp
--mpp in sqsub reserves the total memory for a job. %mem is the memory setting in Gaussian's input file for a single processor, normally the memory needed for the Gaussian job to start. If %mem is not specified, Gaussian applies a default value of 256MB, as described here.
The maximum --mpp is 15g on saw for an 8-way Gaussian job; it can be 20g for an 8-way job on orca. In practice, however, bigger is not always better: setting --mpp just above the job's total memory needs gives the optimal performance. Please use only a portion of the maximum memory for smaller Gaussian jobs, e.g., up to half of the maximum for a 4-way Gaussian job on saw and orca.
Because a Gaussian job needs more memory than what is specified via %mem in the input file, --mpp should be larger than %mem. Normally --mpp is roughly double the %mem value (it is not %mem times the number of threads). The following guideline is based on observation and should work fine on SHARCNET systems:
%mem         --mpp
<=600MB      2g
600-1500MB   4g
1500-2500MB  6g
2500-3500MB  8g
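The guideline table above can be sketched as a small helper (a rough mapping only, assuming %mem is given in MB; suggest_mpp is a hypothetical name, not a SHARCNET tool):

```shell
#!/bin/bash
# Rough sketch: map a %mem value (in MB) to a suggested --mpp value,
# following the guideline table above.
suggest_mpp() {
    local mem_mb=$1
    if   [ "$mem_mb" -le 600 ];  then echo 2g
    elif [ "$mem_mb" -le 1500 ]; then echo 4g
    elif [ "$mem_mb" -le 2500 ]; then echo 6g
    elif [ "$mem_mb" -le 3500 ]; then echo 8g
    else echo "estimate manually"     # beyond the table's range
    fi
}

suggest_mpp 500    # prints 2g
suggest_mpp 2000   # prints 6g
```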
1. Go to the Gaussian 09 Efficiency Considerations page for memory estimates (%mem), etc. for a job.
2. Based on our benchmark results, we recommend -n 8 for MPx and DFT types of jobs, and always 1 cpu (serial) for CI (CIS, CISD, QCISD, etc.) and CC (CCD, CCSD, etc.) based methods on SHARCNET systems.
3. Use an appropriate -r runtime and --mpp=memory in sqsub. Maximum runtime is 7 days; maximum memory is 15g on saw and 10g (or occasionally 20g) on orca for 8-cpu Gaussian jobs.
4. The number of cpus (-n cpus) used in sqsub should match the %nprocs=cpus specified in the input *.com or *.gjf file (default is 1).
5. Run jobs out of your /scratch or /work directory.
6. Delete *.core and the related *.rwf, *.d2e, *.int, and *.scr files manually if your job terminated abnormally (delete the 'jobid' subdirectory from your /scratch when the job with that jobid is finished).
7. Gaussian utilities such as formchk, cubegen, etc. can be run from the command line. Check which version is in use by typing 'which formchk'; if the version is correct, simply run
   formchk name.chk
   otherwise, type the full path, e.g., for g09_C.01:
   /opt/sharcnet/gaussian/g09_C.01/formchk name.chk
   This generates a file 'name.fchk' in the directory.
o Official Gaussian Website
- For more details, see the Gaussian software page in the web portal