
Cluster: orca.sharcnet.ca

Links: System documentation in the SHARCNET Help Wiki

Manufacturer: HP
Operating System: CentOS 6.x
Interconnect: QDR InfiniBand
Total processors/cores: 8880

Nodes:
orca: 1-320
  Type: Compute
  Cores: 24 (2 sockets x 12 cores per socket)
  CPU: AMD Opteron @ 2.2 GHz
  Memory: 32.0 GB
  Local storage: 120 GB

orca: 321-360
  Type: Compute
  Cores: 16 (2 sockets x 8 cores per socket)
  CPU: Intel Xeon @ 2.6 GHz
  Memory: 32.0 GB
  Local storage: 430 GB

orca: 361-388
  Type: Compute
  Cores: 16 (2 sockets x 8 cores per socket)
  CPU: Intel Xeon @ 2.7 GHz
  Memory: 64.0 GB
  Local storage: 500 GB
  Notes: Run time limited to four (4) hours for non-contribution users.

orca: 389-392
  Type: Compute
  Cores: 16 (2 sockets x 8 cores per socket)
  CPU: Intel Xeon @ 2.7 GHz
  Memory: 128.0 GB
  Local storage: 500 GB
  Notes: Run time limited to four (4) hours for non-contribution users.

orca: 9001-9002
  Type: Login
  Cores: 24 (2 sockets x 12 cores per socket)
  CPU: AMD Opteron @ 2.2 GHz
  Memory: 24.0 GB
  Local storage: 280 GB
Total attached storage: 58.6 TB
Suitable use

Low-latency parallel applications.
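To illustrate what that means in practice, below is a minimal MPI ping-pong sketch that estimates the one-way message latency between two ranks. It is not part of the official documentation; it assumes an MPI implementation such as the OpenMPI listed under "Software available", and the file and command names are illustrative (e.g. compile with "mpicc pingpong.c -o pingpong" and launch with "mpirun -np 2 ./pingpong").

    /* pingpong.c -- minimal MPI ping-pong latency probe (illustrative sketch).
     * Rank 0 bounces a single byte off rank 1 many times; half the average
     * round-trip time approximates the interconnect's one-way latency. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, i;
        const int iters = 1000;
        char byte = 0;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);   /* synchronize before timing */
        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {
                /* Rank 0 sends one byte and waits for the echo. */
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                /* Rank 1 echoes the byte straight back. */
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("approx. one-way latency: %.2f microseconds\n",
                   (t1 - t0) / iters / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }

On a QDR InfiniBand fabric such as this one, a probe of this kind typically reports latencies on the order of a few microseconds, which is what makes tightly coupled parallel applications practical here.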

Software available

NAMD, GAUSSIAN, STAR-CCM+, LSDYNA, LAMMPS, CP2K, MATLAB, ESPRESSO, MODE, NCBIC++TOOLKIT, SIESTA, FREEFEM++, BLCR, ABAQUS, PYTHON, NWCHEM, OCTAVE, UTIL, CMAKE, MAP, DAR, R, PARI/GP, NETCDF, FFTW, GCC, CONVERGE, MAPLE, OPEN64, ACML, CHARM++, MERCURIAL, OPENMPI, PETSC_SLEPC, ABINIT, SUBVERSION, BLAST, ADF/BAND, HDF, INTEL, ORCA, SAMTOOLS, GIT, CDF, CPMD, OPENJDK, GNU, TINKER, AMBER, NCL, COMSOL, BOOST, BIOPERL, QD, GNUPLOT, MrBAYES, GROMACS, GMP, BINUTILS, PERL, SPRNG, MKL, BIOSAMTOOLS, MPFR, VIM, MPFUN90, VALGRIND, MPIBLAST, TEXLIVE, RLWRAP, MPFUN2015, MPC, FDTD, YT, DLPOLY, SUPERLU, PNETCDF, COREUTILS, IPM, GSL, BIOPERLRUN, SQ, ILOGCPLEX, PGI, OPENCV, LDWRAPPER, ARPACK-NG, EMACS, CPAN, RUBY, NIX, PROOT, GHC, VMD, ANSYS, AUTODOCKVINA, SPARK, GEANT4, LLVM, NINJA

Current system state details: Graphs

Recent System Notices

Dec 16 2017, 01:28PM
(about 8 hours ago)

There is a problem with Orca's login node: no one can log in. We are working to fix this as soon as possible.

Nov 24 2017, 03:23PM
(22 days ago)

The scratch filesystem on the Orca cluster has been restored and is once again fully available for jobs. If you notice any further issues, please let us know by emailing help@sharcnet.ca.

Nov 24 2017, 01:27PM
(22 days ago)

The Orca cluster is currently experiencing problems with its /scratch/ filesystem, which may affect users. We are working to resolve the problem and will post an update when more information is available.

Nov 10 2017, 05:17PM
(about 1 month ago)

Orca's login node has a bad local disk, which may have caused some recent user-visible instability. We have it operating with a workaround that seems to be stable. We plan to replace the disk and to return to running two login nodes.

Login node issues do not affect running or queued jobs.

Oct 26 2017, 04:58PM
(about 1 month ago)

Orca is available again following the network reconfiguration to address the recent frequent issues. All running jobs were killed. Please report any problems via https://www.sharcnet.ca/my/problems/submit or by email to help@sharcnet.ca.
