This wiki page provides an editable list of development-stage software not currently installed on SHARCNET. A template structure is provided so that staff and regular users alike can add information. Please add a new package name for further consideration.

Template

Homepage: Package website.
Introduction: Short description.
Article: External information link.
Maturity: Alpha, Beta, Release Candidate, Production.
Support: Community Mailing List, None, Paid.
Cost: Describe charging structure.
Parallelism: Serial, Threaded, MPI, Hybrid, Other.
Compatibility: Required architecture.
Document: Internal file available for download.
Variant: Commercial or other spinoffs.
Review: Comments about suitability for SHARCNET.
Repeat: Any of the above fields as required.

Haskell

Homepage: http://www.haskell.org/haskellwiki/Haskell
Article: http://en.wikipedia.org/wiki/Haskell_(programming_language)

Fortress

Homepage: https://projectfortress.java.net/

Chapel

Homepage: http://chapel.cray.com/index.html

X10

Homepage: http://x10-lang.org/

Cilk

Homepage: http://supertech.csail.mit.edu/cilk/

Co-Array Fortran

Homepage: http://www.co-array.org/

ScaleMP

Homepage: http://www.scalemp.com/

Intel Cluster OpenMP

Homepage: http://software.intel.com/en-us/articles/cluster-openmp-for-intel-compilers
Article: http://archive.hpcwire.com/hpc/658711.html
Review: In 2004 merz tested it with an OpenMP magnetohydrodynamic solver. While the code scaled well under plain OpenMP, it suffered a performance penalty when run via Cluster OpenMP. Applications that pore through large amounts of data to extract information are especially well suited to Cluster OpenMP: programs that scale successfully with OpenMP on SMP systems, have good data locality, and use few locks and little synchronization; in short, anything embarrassingly parallel.

Unified Parallel C (UPC)

Homepage: http://upc.gwu.edu/
Introduction: UPC is an explicit parallel extension of ANSI C, based on the partitioned global address space (PGAS) programming model pioneered by languages such as AC, Split-C, and PCP.
Document: SHARCNET slides on Unified Parallel C (UPC).
Variant: [http://h30097.www3.hp.com/upc/index.htm HP Unified Parallel C (UPC)]

Global Arrays

Homepage: http://www.emsl.pnl.gov/docs/global/
Introduction: Global Arrays is an MPI-based toolkit for working with large dense multi-dimensional arrays on distributed-memory machines.
Review: In 2003-2006 razoumov used it in a neutrino transport code in which large (7D, multi-GB) neutrino-matter interaction tables could not fit into the memory of a single processor and were instead kept in simulated "shared memory" with GA.