
Cluster orca.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 6.x
Interconnect: QDR InfiniBand
Total processors/cores: 8880
Nodes
orca: 1‑320
24 cores
2 sockets x 12 cores per socket
AMD Opteron @ 2.2 GHz
Type: Compute
Memory: 32.0 GB
Local storage: 120 GB
orca: 321‑360
16 cores
2 sockets x 8 cores per socket
Intel Xeon @ 2.6 GHz
Type: Compute
Memory: 32.0 GB
Local storage: 430 GB
orca: 361‑388
16 cores
2 sockets x 8 cores per socket
Intel Xeon @ 2.7 GHz
Type: Compute
Notes: Run time limited to four (4) hours for non-contribution users.
Memory: 64.0 GB
Local storage: 500 GB
orca: 389‑392
16 cores
2 sockets x 8 cores per socket
Intel Xeon @ 2.7 GHz
Type: Compute
Notes: Run time limited to four (4) hours for non-contribution users.
Memory: 128.0 GB
Local storage: 500 GB
orca: 9001‑9002
24 cores
2 sockets x 12 cores per socket
AMD Opteron @ 2.2 GHz
Type: Login
Memory: 24.0 GB
Local storage: 280 GB
Total attached storage: 58.6 TB
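As a quick cross-check, the per-range core counts in the node table sum to the stated 8880 total. A minimal sketch using shell arithmetic (node and core counts taken from the table above):

```shell
# Node groups from the table: nodes_in_range * cores_per_node
compute=$(( 320*24 + 40*16 + 28*16 + 4*16 ))  # orca 1-320, 321-360, 361-388, 389-392
login=$(( 2*24 ))                             # orca 9001-9002 (login nodes)
total=$(( compute + login ))
echo "$total"                                 # 8880, matching the figure above
```

Note that the 8880 figure only works out if the two login nodes are counted alongside the compute nodes.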
Suitable use

Low latency parallel applications.

Software available

FDTD, GCC, UTIL, GSL, OCTAVE, MPFUN2015, PETSC_SLEPC, FFTW, SQ, MATLAB, SAMTOOLS, BIOPERL, OPENCV, EMACS, OPENMPI, INTEL, VIM, NETCDF, MPFR, GROMACS, PYTHON, BIOSAMTOOLS, BINUTILS, BOOST, SPARK, CPAN, LAMMPS, FREEFEM++, NCL, NCBIC++TOOLKIT, GHC, BLCR, R, PERL, MPIBLAST, BIOPERLRUN, MrBAYES, GMP, SYSTEM, ABINIT, SPRNG, SUBVERSION, OPEN64, NAMD, MPC, PNETCDF, CPMD, NWCHEM, MAP, CP2K, BLAST, GNUPLOT, ESPRESSO, COREUTILS, TEXLIVE, GIT, LLVM, ADF/BAND, DLPOLY, OPENJDK, CMAKE, HDF, SUPERLU, CONVERGE, TINKER, PROOT, ACML, GDB, MONO, MODE, IPM, AUTODOCKVINA, MPFUN90, NIX, GNU, DAR, CDF, PARI/GP, SIESTA, NINJA, QD, ORCA, RUBY, CHARM++, YT, PGI, MKL, LDWRAPPER, VALGRIND, MERCURIAL, RLWRAP, ILOGCPLEX, ARPACK-NG, GEANT4, VMD, MAPLE

Recent System Notices

Status | Status Notes
May 07 2019, 05:21PM
(5 months ago)

orca has been decommissioned

Mar 29 2019, 02:19PM
(7 months ago)

orca has been decommissioned. Login nodes will be left running for 2 weeks so you can copy any last-minute data from /scratch, but the compute nodes and scheduler are shut down and no more jobs will run.

Mar 25 2019, 04:48PM
(7 months ago)

/project and /scratch are available again.

Due to its age, orca is still scheduled to be decommissioned on March 29, 2019.

After March 29 any data you have in /scratch will become inaccessible; anything you want to keep should be copied elsewhere as soon as possible. Data in /home and /project is shared with Graham and will be unaffected.

If you have not already done so we strongly recommend that you move all of your computing to the new national systems Graham, Cedar and Niagara.

Mar 25 2019, 04:11PM
(7 months ago)

orca is currently unable to access the /project or /scratch directories; we're working on it.

Feb 21 2019, 02:48PM
(8 months ago)

Due to their age, the legacy clusters orca, windeee, goblin, shadowfax, copper, and monk will be decommissioned on March 29, 2019. Some nodes from goblin, shadowfax, and copper will be transferred to a new cluster; the affected contributors will be contacted separately.

After March 29 any data you have in /scratch on these clusters will become inaccessible; anything you want to keep should be copied elsewhere as soon as possible. For orca, data in /home and /project is shared with Graham and will be unaffected. For the other clusters, data in /home and /work will remain accessible for a few months via dtn.sharcnet.ca.

If you have not already done so we strongly recommend that you move all of your computing to the new national systems Graham, Cedar and Niagara. See this URL for help: https://docs.computecanada.ca/wiki/Getting_Started
