
Cluster: orca.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 6.x
Interconnect: QDR InfiniBand
Total processors/cores: 8880
Nodes
orca: 1‑320
24 cores
2 sockets x 12 cores per socket
AMD Opteron @ 2.2 GHz
Type: Compute
Memory: 32.0 GB
Local storage: 120 GB
orca: 321‑360
16 cores
2 sockets x 8 cores per socket
Intel Xeon @ 2.6 GHz
Type: Compute
Memory: 32.0 GB
Local storage: 430 GB
orca: 361‑388
16 cores
2 sockets x 8 cores per socket
Intel Xeon @ 2.7 GHz
Type: Compute
Notes: Run time limited to four (4) hours for non-contribution users.
Memory: 64.0 GB
Local storage: 500 GB
orca: 389‑392
16 cores
2 sockets x 8 cores per socket
Intel Xeon @ 2.7 GHz
Type: Compute
Notes: Run time limited to four (4) hours for non-contribution users.
Memory: 128.0 GB
Local storage: 500 GB
orca: 9001‑9002
24 cores
2 sockets x 12 cores per socket
AMD Opteron @ 2.2 GHz
Type: Login
Memory: 24.0 GB
Local storage: 280 GB
Total attached storage: 58.6 TB
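As noted above, the contributed 64 GB and 128 GB nodes (orca 361-392) cap run times at four hours for non-contribution users. On legacy Orca, jobs were submitted through SHARCNET's sq scheduler; a minimal sketch of requesting a run time within that cap (the program name, output file, and core count here are placeholders, and the flags follow the sqsub conventions described in the SHARCNET Help Wiki):

```shell
# Serial job with a 4-hour runtime request, the limit for
# non-contribution users on the contributed nodes:
sqsub -q serial -r 4h -o mycode.out ./mycode

# Parallel MPI job on 16 cores with the same runtime request:
sqsub -q mpi -n 16 -r 4h -o mycode.out ./mycode
```

Jobs requesting more than four hours would only be eligible for the non-contributed node pools.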
Suitable use

Low latency parallel applications.

Software available

NAMD, GAUSSIAN, STAR-CCM+, LSDYNA, LAMMPS, CP2K, MATLAB, GCC, ESPRESSO, MODE, NCBIC++TOOLKIT, FDTD, SIESTA, FREEFEM++, BLCR, ABAQUS, PYTHON, NWCHEM, OCTAVE, UTIL, CMAKE, MAP, DAR, SPARK, R, PARI/GP, NETCDF, FFTW, CONVERGE, ANSYS, OPEN64, ACML, CHARM++, MERCURIAL, OPENMPI, PETSC_SLEPC, ABINIT, SUBVERSION, BLAST, ADF/BAND, HDF, INTEL, BOOST, ORCA, SAMTOOLS, GIT, CDF, MAPLE, CPMD, OPENJDK, GNU, TINKER, AMBER, NCL, COMSOL, GDB, BIOPERL, QD, GNUPLOT, MrBAYES, GROMACS, GMP, BINUTILS, PERL, SPRNG, MKL, BIOSAMTOOLS, MPFR, VIM, MPFUN90, VALGRIND, MPIBLAST, TEXLIVE, RLWRAP, MPFUN2015, MPC, YT, DLPOLY, SUPERLU, PNETCDF, COREUTILS, IPM, GSL, BIOPERLRUN, SQ, ILOGCPLEX, PGI, OPENCV, LLVM, LDWRAPPER, ARPACK-NG, EMACS, CPAN, RUBY, NIX, MONO, PROOT, GHC, VMD, SYSTEM, AUTODOCKVINA, GEANT4, NINJA


Recent System Notices

Status Notes
Oct 02 2018, 02:28PM

Orca is slowly being converted into a new cluster, similar to Graham, with the same software, scheduler and storage.

orca.sharcnet.ca will slowly shrink as nodes are moved over to the ‘new’ cluster.

The new cluster is accessed via orca.computecanada.ca

Users will connect to the new cluster using the same credentials they use to access Graham. Orca will have the same software, home and project space as Graham but will have its own scratch space.
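The connection steps above can be sketched as follows. The username is a placeholder, and since the notice says the new cluster shares Graham's scheduler, jobs there would presumably go through Slurm rather than sq:

```shell
# New cluster: log in with Compute Canada (Graham) credentials.
ssh username@orca.computecanada.ca

# Legacy cluster: still reachable during the transition with
# SHARCNET credentials.
ssh username@orca.sharcnet.ca

# On the new cluster, submit jobs with Slurm, e.g. a one-hour,
# single-task job (job.sh is a placeholder batch script):
sbatch --time=01:00:00 --ntasks=1 job.sh
```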

The conversion will begin Oct 4th and will continue for about 2 weeks. During this time both clusters will be available, but users can expect minor issues as we perform the transition.

A new update will be posted once the conversion is complete.

Aug 20 2018, 09:33AM

Orca is back online after urgent electrical repairs on the main feed to the server room.

Any jobs running at the time of the outage were killed, and will need to be restarted.

Aug 20 2018, 12:18AM

Orca will be offline on Monday the 20th of August from midnight to noon due to urgent repairs to the electrical feed for the server room.

Any jobs running at the time were killed, and will need to be restarted.

Aug 14 2018, 03:53PM

Orca will be offline on Monday the 20th of August from midnight to noon due to urgent repairs to the electrical feed for the server room.

Any jobs running at the time will be killed, and will need to be restarted.

Jul 13 2018, 10:33AM

Orca will be reconfigured soon to provide an updated environment nearly identical to Graham/Cedar, and using ComputeCanada account credentials.

We have the new cluster operating now (so “legacy” Orca is running with somewhat fewer nodes), and it will be opened for general use in 1-2 weeks.
