
31.3.1 Starting Parallel FLUENT on a Linux/UNIX System Using Command Line Options

To start the parallel version of FLUENT using command line options, use the following syntax at the shell prompt:

fluent version -t nprocs [-p interconnect] [-mpi=mpi_type] [-cnf=hosts_file]


For example, to start the 3D solver with 4 compute nodes on the machines listed in the text file fluent.hosts, using the Myrinet interconnect, enter the following at the command prompt:

fluent 3d -t4 -pmyrinet -cnf=fluent.hosts

Note that if the optional -cnf=hosts_file argument is specified, a compute node will be spawned on each machine listed in the file hosts_file. (If you enter this optional argument, do not include the square brackets.)
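As a sketch of the hosts file format (the file name fluent.hosts and the machine names below are placeholder assumptions), each line names one machine on which a compute node will be spawned; listing the same machine on more than one line is commonly used to spawn that many compute nodes on it (an assumption here; verify against your FLUENT version):

```shell
# Hypothetical hosts file for a 4-process run; machine names are placeholders.
# One compute node is spawned per line, so the line count should match -t nprocs.
cat > fluent.hosts <<'EOF'
cluster-node1
cluster-node2
cluster-node3
cluster-node3
EOF

# Count the compute nodes this file would request (should match -t4):
wc -l < fluent.hosts
```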

The supported interconnects for parallel Linux/UNIX machines are listed below (Table 31.3.1, Table 31.3.2, and Table 31.3.3), along with their associated communication libraries, the corresponding syntax, and the supported architectures:

Table 31.3.1: Supported Interconnects for Linux/UNIX Platforms (Per Platform)

  Platform   Processor Architecture   Interconnects/Systems*
  Linux      32-bit                   ethernet (default), infiniband, myrinet
             64-bit AMD64             ethernet (default), infiniband, myrinet, crayx
             64-bit Itanium           ethernet (default), infiniband, myrinet, altix
  Sun        32-bit                   vendor** (default), ethernet
             64-bit                   vendor** (default), ethernet
  SGI        32-bit                   vendor** (default), ethernet
             64-bit                   vendor** (default), ethernet
  HP         32-bit                   vendor** (default), ethernet
             64-bit PA-RISC           vendor** (default), ethernet
             64-bit Itanium           vendor** (default), ethernet
  IBM        32-bit                   vendor** (default), ethernet
             64-bit                   vendor** (default), ethernet

(*) Node processes on the same machine communicate by shared memory.
(**) vendor indicates a proprietary vendor interconnect. The specific proprietary interconnects that are supported are determined by what the vendor's MPI supports.

Table 31.3.2: Available MPIs for Linux/UNIX Platforms

  MPI       Syntax (flag)   Communication Library   Notes
  hp        -mpi=hp         HP MPI                  General purpose, for SMPs and clusters
  intel     -mpi=intel      Intel MPI               General purpose, for SMPs and clusters
  mpich2    -mpi=mpich2     MPICH2                  MPI-2 implementation from Argonne National Laboratory; for both SMPs and Ethernet clusters
  mpich     -mpi=mpich      MPICH1                  Legacy MPI from Argonne National Laboratory
  mpichmx   -mpi=mpichmx    MPICH-MX                Only for Myrinet MX clusters
  mvapich   -mpi=mvapich    MVAPICH                 Only for Infiniband clusters
  sgi       -mpi=sgi        SGI MPI for Altix       Only for SGI Altix systems (SMP); FLUENT must be started on the system where the parallel node processes are to run
  cray      -mpi=cray       Cray MPI for XD1        Only for Cray XD1 systems
  vendor    -mpi=vendor     Vendor MPI              -
  net       -mpi=net        socket                  -
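Putting the flags together, a full launch command can be assembled from the table entries. The sketch below only builds and prints the command (a dry run) so each piece is visible; the solver version, process count, interconnect, MPI, and hosts file name are illustrative assumptions, not requirements:

```shell
#!/bin/sh
# Dry-run sketch: assemble a parallel FLUENT command line from its parts.
# All values below are illustrative assumptions.
VERSION=3d                 # solver version
NPROCS=8                   # number of compute nodes (-t)
INTERCONNECT=infiniband    # -p flag (see Table 31.3.1)
MPI=intel                  # -mpi flag (see Table 31.3.2)
HOSTS=fluent.hosts         # machines on which to spawn compute nodes

CMD="fluent $VERSION -t$NPROCS -p$INTERCONNECT -mpi=$MPI -cnf=$HOSTS"
echo "$CMD"   # prints: fluent 3d -t8 -pinfiniband -mpi=intel -cnf=fluent.hosts
```

Note that the combination must be valid per Table 31.3.3; for example, intel over Infiniband is listed for the lnamd64 architecture.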

Table 31.3.3: Supported MPIs for Linux/UNIX Architectures (Per Interconnect)

  Architecture      Ethernet                      Myrinet                 Infiniband                    Proprietary Systems
  lnx86             hp (default), mpich2, net     hp                      hp                            -
  lnamd64           hp (default), intel, net      hp (default), mpichmx   hp (default), intel, mvapich  cray [for -pcrayx]
  lnia64            hp (default), intel, net      hp                      hp (default), intel           sgi [for -paltix]
  aix51_64          vendor (default), mpich, net  -                       -                             vendor [for -pvendor]
  hpux11_64         vendor (default), net         -                       -                             vendor [for -pvendor]
  hpux11_ia64       vendor (default), net         -                       -                             vendor [for -pvendor]
  irix65_mpis4_64   vendor (default), mpich, net  -                       -                             vendor [for -pvendor]
  ultra_64          vendor (default), mpich, net  -                       -                             vendor [for -pvendor]
  aix51             vendor (default), mpich, net  -                       -                             vendor [for -pvendor]
  hpux11            vendor (default), mpich, net  -                       -                             vendor [for -pvendor]
  irix65_mpis4      vendor (default), mpich, net  -                       -                             vendor [for -pvendor]
  ultra             vendor (default), mpich, net  -                       -                             vendor [for -pvendor]

© Fluent Inc. 2006-09-20