
22.11.9 Parallel Processing for the Discrete Phase Model

FLUENT offers two modes of parallel processing for the discrete phase model: the Shared Memory and Message Passing options under the Parallel tab in the Discrete Phase Model panel. The Shared Memory option is suitable for computations where the machine running the FLUENT host process is an adequately large shared-memory multiprocessor. The Message Passing option is enabled by default and is suitable for generic distributed-memory cluster computing.


When tracking particles in parallel, the DPM cannot be used with any of the multiphase flow models (VOF, mixture, or Eulerian) if the Shared Memory option is enabled. (Using the Message Passing option when running in parallel makes all of the multiphase flow models compatible with the DPM.)

The Shared Memory option is implemented using POSIX Threads (pthreads) based on a shared-memory model. Once the Shared Memory option is enabled, you can also enable the Workpile Algorithm and specify the Number of Threads. By default, the Number of Threads is equal to the number of compute nodes specified for the parallel computation. You can modify this value based on the computational requirements of the particle calculations; if, for example, the particle calculations require more computation than the flow calculation, you can increase the Number of Threads (up to the number of available processors) to improve performance. When the Shared Memory option is used, the particle calculations are managed entirely by the FLUENT host process, so you must make sure that the machine executing the host process has enough memory to accommodate the entire grid.
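As a rough illustration of how a pthreads-based workpile distributes particle work across a fixed pool of threads, here is a minimal sketch; the queue, worker, and particle-tracking names below are illustrative assumptions, not FLUENT's internal implementation.

/* Illustrative workpile sketch (not FLUENT internals): a fixed pool of
 * pthreads pulls particle indices from a shared counter until the
 * workpile is empty. */
#include <pthread.h>
#include <stdio.h>

#define NUM_PARTICLES 1000
#define NUM_THREADS   4        /* analogous to the "Number of Threads" setting */

static int next_particle = 0;                     /* shared work index */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void track_particle(int p)
{
    /* placeholder for the real particle trajectory integration */
    (void) p;
}

static void *worker(void *arg)
{
    (void) arg;
    for (;;) {
        int p;
        pthread_mutex_lock(&lock);                /* grab the next unit of work */
        p = (next_particle < NUM_PARTICLES) ? next_particle++ : -1;
        pthread_mutex_unlock(&lock);
        if (p < 0)
            break;                                /* workpile is empty */
        track_particle(p);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];
    int i;

    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);

    printf("tracked %d particles with %d threads\n", NUM_PARTICLES, NUM_THREADS);
    return 0;
}

Because all threads share the same address space, this pattern requires the host machine to hold the entire grid in memory, which is why the memory requirement noted above applies to the Shared Memory option.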


Note that the Shared Memory option is not available for Windows 2000.

The Message Passing option enables cluster computing and also works on shared-memory machines. With this option enabled, the compute-node processes perform the particle work on their local partitions, and particle migration to other compute nodes is implemented using message passing primitives. There are no special requirements for the host machine. Note that this option is not available if the Cloud Model option is turned on under the Turbulent Dispersion tab of the Set Injection Properties panel. When running FLUENT in parallel, pathline displays are by default computed in serial on the host node; they can be computed in parallel on distributed-memory systems if the Message Passing option is selected in the Discrete Phase Model panel.
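The phrase "message passing primitives" can be made concrete with a small, hedged MPI sketch of a particle crossing from one partition to a neighboring one; the particle structure, its fields, and the ranks used here are illustrative assumptions, not FLUENT's internal data types or migration protocol.

/* Illustrative sketch (not FLUENT internals): migrating a particle record
 * to a neighboring partition with plain MPI point-to-point calls. */
#include <mpi.h>
#include <stdio.h>

typedef struct {          /* minimal illustrative particle state */
    double pos[3];
    double vel[3];
    double diam;
} particle_t;

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            /* the particle has crossed into the neighbor's partition: send it */
            particle_t p = {{0.1, 0.2, 0.3}, {1.0, 0.0, 0.0}, 1e-4};
            MPI_Send(&p, (int) sizeof p, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive the migrated particle and continue tracking it locally */
            particle_t p;
            MPI_Recv(&p, (int) sizeof p, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received particle with diameter %g\n", p.diam);
        }
    }

    MPI_Finalize();
    return 0;
}

Because each compute node only tracks particles in its own partition and hands them off when they leave, no single machine needs to hold the whole grid, which is what makes this option suitable for distributed-memory clusters.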

You may seamlessly switch between the Shared Memory option and the Message Passing option at any time during the FLUENT session.

In addition to performing general parallel processing of the discrete phase model, you have the option of implementing DPM-specific user-defined functions in parallel FLUENT. See the UDF Manual for details on the parallelization of DPM UDFs.
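As a rough illustration, a minimal DPM UDF might look like the sketch below. The header and the DEFINE_DPM_BODY_FORCE macro follow the UDF Manual, but the force law and the constant are illustrative assumptions only; consult the UDF Manual for the documented arguments, return-value convention, and the parallel considerations that apply to your case.

/* Hedged sketch of a DPM user-defined function (not taken from the FLUENT
 * documentation): it returns an illustrative body-force term in the x
 * direction. Under the Message Passing option, such a UDF is executed on
 * the compute node tracking the particle in its local partition, so this
 * simple case needs no explicit host/node communication. */
#include "udf.h"

DEFINE_DPM_BODY_FORCE(illustrative_body_force, p, i)
{
    real bf = 0.0;

    if (i == 0)          /* x direction only */
        bf = 0.5;        /* illustrative constant; see the UDF Manual for the
                            units and convention of the return value */

    return bf;
}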

