
31.1 Introduction to Parallel Processing

The FLUENT serial solver manages file input and output, data storage, and flow field calculations using a single solver process on a single computer. FLUENT's parallel solver allows you to compute a solution by using multiple processes that may be executing on the same computer, or on different computers in a network. Figures 31.1.1 and 31.1.2 illustrate the serial and parallel FLUENT architectures.

Parallel processing in FLUENT involves an interaction between FLUENT, a host process, and a set of compute-node processes. FLUENT interacts with the host process and the collection of compute nodes using a utility called cortex that manages FLUENT's user interface and basic graphical functions.

Figure 31.1.1: Serial FLUENT Architecture

Figure 31.1.2: Parallel FLUENT Architecture

Parallel FLUENT splits up the grid and data into multiple partitions, then assigns each grid partition to a different compute process (or node). The number of partitions is an integral multiple of the number of compute nodes available to you (e.g., 8 partitions for 1, 2, 4, or 8 compute nodes). The compute-node processes can be executed on a massively-parallel computer, a multiple-CPU workstation, or a network cluster of computers.
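
To make the partition-to-node rule above concrete, the short C program below is a conceptual sketch only; the round-robin assignment and the names n_partitions and n_nodes are illustrative assumptions, not FLUENT's actual distribution scheme. It simply shows how 8 partitions could be spread over 4 compute nodes:

    #include <stdio.h>

    /* Conceptual sketch only: distribute n_partitions grid partitions
       over n_nodes compute nodes, assuming the partition count is an
       integral multiple of the node count (e.g., 8 partitions on
       1, 2, 4, or 8 nodes). */
    int main(void)
    {
        const int n_partitions = 8;
        const int n_nodes      = 4;   /* must divide n_partitions evenly */
        int p;

        for (p = 0; p < n_partitions; p++) {
            /* hypothetical round-robin assignment of partition p */
            printf("partition %d -> compute-node-%d\n", p, p % n_nodes);
        }
        return 0;
    }

With 8 partitions and 4 compute nodes, each node receives exactly two partitions.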


In general, as the number of compute nodes increases, turnaround time for the solution will decrease. However, parallel efficiency decreases as the ratio of communication to computation increases, so you should be careful to choose a large enough problem for the parallel machine.
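
A common way to express this trade-off, using notation introduced here rather than taken from this section (FLUENT's own timing and performance checks are described in Section 31.6), is

    speedup:    S(N) = T(1) / T(N)
    efficiency: E(N) = S(N) / N

where T(1) is the wall-clock solution time on a single compute node and T(N) is the time on N nodes. As communication takes up a larger share of each iteration, E(N) falls further below 1, which is why small problems scale poorly on many compute nodes.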

FLUENT uses a host process that does not contain any grid data. Instead, the host process only interprets commands from FLUENT's graphics-related interface, cortex.

The host passes those commands over a socket interconnect to a single designated compute node called compute-node-0. This specialized compute node then distributes the host's commands to the other compute nodes. Each compute node simultaneously executes the same program on its own data set. Communication from the compute nodes to the host is possible only through compute-node-0, and only when all compute nodes have synchronized with each other.
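
The command-relay and synchronization behavior described above can be pictured with a generic MPI program. The sketch below is an illustration only, not FLUENT code; rank 0 merely stands in for compute-node-0, broadcasting a "command" to every process and synchronizing before anything is reported back:

    #include <mpi.h>
    #include <stdio.h>

    /* Illustration only (not FLUENT source): rank 0 plays the role of
       compute-node-0, relaying a command to the other compute nodes,
       which all synchronize before any result travels back. */
    int main(int argc, char **argv)
    {
        int rank, size, command = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0)
            command = 42;   /* stands in for a command from the host */

        /* compute-node-0 (rank 0) distributes the command to all nodes */
        MPI_Bcast(&command, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* ...each node would now run the same code on its own data... */

        /* all nodes synchronize before anything is sent back upstream */
        MPI_Barrier(MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d compute nodes executed command %d\n", size, command);

        MPI_Finalize();
        return 0;
    }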

Each compute node is virtually connected to every other compute node, and relies on inter-process communication to perform such functions as sending and receiving arrays, synchronizing, and performing global operations (such as summations over all cells). Inter-process communication is managed by a message-passing library. For example, the message-passing library could be a vendor implementation of the Message Passing Interface (MPI) standard, as depicted in Figure 31.1.2.
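
The global operations mentioned above map naturally onto MPI collective calls. The short program below is an illustration only, not FLUENT's implementation; local_sum simply stands in for a sum over the cells stored on one compute node:

    #include <mpi.h>
    #include <stdio.h>

    /* Illustration only: each compute node contributes a partial sum
       over its own partition, and MPI_Allreduce combines the partial
       sums so that every node holds the global total. */
    int main(int argc, char **argv)
    {
        int rank;
        double local_sum, global_sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* hypothetical per-partition contribution */
        local_sum = 1.0 + rank;

        /* sum the contributions across all compute nodes */
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum over all partitions = %g\n", global_sum);

        MPI_Finalize();
        return 0;
    }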

All of the parallel FLUENT processes (as well as the serial process) are identified by a unique integer ID. The host collects messages from compute-node-0 and performs operations (such as printing, displaying messages, and writing to a file) on all of the data, in the same way as the serial solver.



Recommended Usage of Parallel FLUENT


The recommended procedure for using parallel FLUENT is as follows:

1.   Start up the parallel solver. See Sections 31.2 and 31.3 for details.

2.   Read your case file and have FLUENT partition the grid automatically upon loading it. It is best to partition after the problem is set up, since partitioning has some model dependencies (e.g., adaption on non-conformal interfaces, sliding-mesh and shell-conduction encapsulation).

Note that there are other approaches for partitioning, including manual partitioning in either the serial or the parallel solver. See Section 31.5 for details.

3.   Review the partitions and perform partitioning again, if necessary. See Section 31.5.6 for details on checking your partitions.

4.   Calculate a solution. See Section 31.6 for information on checking and improving the parallel performance.

