Project Title: Toward Exascale Simulations of 3D Radiative Transfer for Cloudy Atmospheres
This project was successfully completed in September 2011.
- Howard Barker, Environment Canada
- Jason Cole, Environment Canada
- Philip Austin, UBC
SHARCNET programming support:
- Pawel Pomorski
- Alex Razoumov
The goal of this project is to modify an existing radiative transfer code (one of the components of the climate and weather models used by the EC group) to make it fully parallel. The code uses a Monte Carlo approach: photons are injected into the domain under study and their propagation through it is computed.
The initial code supplies each processor with the entire spatial domain and injects N photons on each parallel processor. This approach does yield a parallel speedup, but it suffers from memory limitations since every processor must hold the full domain. The first improvement to the code will therefore be to decompose the spatial domain into subdomains assigned to individual processors. This will require adding code to hand photons off between subdomains as they cross the boundaries.
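The planned decomposition can be sketched in miniature. The following is a serial Python stand-in for the MPI buffer exchange, not the project's Fortran code: the x-axis is split into subdomains, and a photon that steps past a subdomain edge is placed in a transfer buffer for its neighbour instead of being traced further locally. All names and numbers (NSUB, the step size, the absorption probability) are illustrative assumptions.

```python
import random

NSUB = 4            # number of subdomains along x (hypothetical)
XMAX = 1.0          # global domain size
DX = XMAX / NSUB    # width of one subdomain

def owner(x):
    """Index of the subdomain containing position x."""
    return min(int(x / DX), NSUB - 1)

def trace(photon, sub):
    """Advance a photon until it is absorbed, escapes the global domain,
    or leaves subdomain `sub` (in which case it must be handed off)."""
    x, mu = photon['x'], photon['mu']        # position and direction cosine
    while True:
        x += mu * random.uniform(0.0, 0.05)  # toy free-flight step
        if x < 0.0 or x >= XMAX:
            return 'escaped', None
        if owner(x) != sub:                  # crossed a subdomain boundary
            photon['x'] = x
            return 'handoff', owner(x)
        if random.random() < 0.1:            # toy absorption probability
            return 'absorbed', None

random.seed(1)
buffers = {s: [] for s in range(NSUB)}       # per-subdomain incoming photons
for _ in range(1000):                        # inject photons at x = 0
    buffers[0].append({'x': 0.0, 'mu': 1.0})

absorbed = escaped = 0
while any(buffers.values()):                 # sweep until all buffers drain
    for sub in range(NSUB):
        incoming, buffers[sub] = buffers[sub], []
        for photon in incoming:
            fate, dest = trace(photon, sub)
            if fate == 'handoff':
                buffers[dest].append(photon)  # would be an MPI send/recv
            elif fate == 'absorbed':
                absorbed += 1
            else:
                escaped += 1
print(absorbed + escaped)  # every injected photon is accounted for
```

In the real code each buffer lives on a different MPI rank, so the handoff becomes a message; the bookkeeping requirement is the same, though: no photon may be lost or double-counted when it crosses a boundary.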
Once that is done, the second stage of the project will be to improve the above approach to ensure good load balancing between processors.
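One common way to think about this second stage is static load balancing from work estimates. The sketch below (illustrative only, not the project's actual scheme) weights subdomains by a hypothetical per-subdomain photon workload, e.g. scattering counts from the previous timestep, and assigns them to processors with the classic longest-processing-time greedy heuristic: each next-heaviest subdomain goes to the currently least-loaded processor.

```python
import heapq

def balance(work, nprocs):
    """Map subdomain index -> processor, greedily evening out total work."""
    heap = [(0.0, p) for p in range(nprocs)]   # (current load, processor)
    heapq.heapify(heap)
    assign = {}
    for sub in sorted(range(len(work)), key=lambda s: -work[s]):
        load, p = heapq.heappop(heap)          # least-loaded processor
        assign[sub] = p
        heapq.heappush(heap, (load + work[sub], p))
    return assign

work = [9.0, 1.0, 6.0, 2.0, 3.0, 7.0]          # hypothetical photon counts
assign = balance(work, 3)
loads = [sum(work[s] for s, p in assign.items() if p == q) for q in range(3)]
print(sorted(loads))                           # near-equal per-processor work
```

With the numbers above the three processors end up with loads 9, 9, and 10 out of a total of 28, i.e. close to the ideal even split.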
Safe physics assumptions
- Radiation propagates at the finite speed c, so strictly speaking the equation of radiative transfer should include either a time-dependent advection term (differential transfer equation) or the "retarded" energy (integral transfer equation). Our assumption is that c is so large that all changes in the radiation field can be treated as instantaneous, so the c-dependent terms can be dropped.
- The conservation equations of radiation hydrodynamics describe conservation of mass, momentum, and energy. Ideally, they should be solved in one (perhaps implicit) update. Operator-splitting them into two steps (hydrodynamics, then transfer) is good enough for this problem, but it is worth keeping in mind that this is an approximation.
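A toy model (hypothetical numbers, nothing like the model's actual equations) makes the nature of the splitting error in the second assumption concrete: advancing a single cell's temperature first by a "hydro" cooling term and then by a "radiative" heating term reproduces the coupled solution only to first order in the timestep.

```python
import math

def hydro_update(T, dt, cool=0.5):
    return T - cool * T * dt          # toy exponential hydrodynamic cooling

def transfer_update(T, dt, heat=1.0):
    return T + heat * dt              # toy constant radiative heating

def step_operator_split(T, dt):
    T = hydro_update(T, dt)           # step 1: hydrodynamics only
    T = transfer_update(T, dt)        # step 2: radiative transfer only
    return T

T, dt = 1.0, 0.01
for _ in range(100):                  # integrate to t = 1
    T = step_operator_split(T, dt)

# Exact solution of the coupled equation dT/dt = -0.5*T + 1, T(0) = 1:
exact = 2.0 - math.exp(-0.5)
print(abs(T - exact) < 0.01)          # small O(dt) error, not zero
```

The split result differs from the exact coupled solution by an O(dt) amount; a single implicit update of both terms would remove that particular error at the cost of a more complicated solve.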
Approximations that we should worry about
- It is assumed that all variations in matter properties caused by radiation -- heating and the resulting changes in opacity/emissivity -- are not fast, i.e. they happen on the fluid-flow timescale (or longer), and can therefore be computed in the hydrodynamical update. I suspect that in the cloud problem many radiation-driven changes happen on a timescale dictated by the microphysics of radiation-matter interaction, which is often very short: e.g., a cloud can be heated or evaporated by radiation before it can adjust hydrodynamically. This means that radiative transfer/microphysics updates need to be done continuously throughout the hydrodynamical timestep, as opposed to once per timestep. There are two methods: subcycling (transfer-microphysics-transfer-microphysics-...) or a simultaneous implicit solve.
- With Monte Carlo transfer the stochastic error decreases as 1/sqrt(numberOfPhotons); in other words, shooting 1000 times more photons lowers the error only by a factor of sqrt(1000) ~ 30. To obtain a solution accurate to 8-9 significant digits, how many photons would one need to propagate at each timestep -- 1e20? This is why a proper discretization of the transfer equation in all dimensions (3 spatial coordinates, two angles, frequency) is vastly superior to Monte Carlo for error control. There are many such methods -- ray tracing, direct Boltzmann solvers, and various approximations (variable Eddington tensors, moment-limited approximations) -- but they clearly go well beyond the scope of this programming project. As a sanity check with Monte Carlo, we should be aware of the solution accuracy, e.g. how many photons we need to launch to reach 1% (or 1e-4, 1e-6) accuracy in the heating rates in each cell. This number of photons influences not only the CPU time but also the amount of communication between processors/nodes: while pursuing decent accuracy on larger grids, the problem can quickly become dominated by passing photon buffers between processors and the associated MPI bottlenecks.
- Domain decomposition of radiative transfer may present its own problems. For example, if there are any radiation-driven fronts (I am thinking of ionization fronts in astrophysics, but coupling radiation to hydro may give rise to its own instabilities and fronts propagating at finite speeds), the order in which photons are passed between processors can become important. The domain on the left may or may not develop some important instability that affects transfer in the domain on the right, on a timescale faster than the fluid flow; doing transfer in the domain on the right may then be a waste of CPU time until we have the solution from the domain on the left. Add to this the fact that there is no simple "left" and "right" -- photon directions span two angles (theta, phi) -- and you will see how complex this problem can become.
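The 1/sqrt(N) scaling quoted above is easy to check empirically. The toy below (illustrative only, not the project's transfer code) uses Monte Carlo to estimate the mean free path of an exponential absorption process with true value 1.0, and compares the RMS error for N photons against 100*N photons: the error ratio should be close to sqrt(100) = 10.

```python
import math
import random

def rms_error(n_photons, n_trials, rng):
    """RMS error of the Monte Carlo estimate of the mean free path
    (exponential free paths with unit opacity, true mean 1.0)."""
    sq = 0.0
    for _ in range(n_trials):
        est = sum(-math.log(1.0 - rng.random())      # inverse-CDF sampling
                  for _ in range(n_photons)) / n_photons
        sq += (est - 1.0) ** 2
    return math.sqrt(sq / n_trials)

rng = random.Random(42)
e_small = rms_error(100, 200, rng)
e_large = rms_error(10000, 200, rng)   # 100x more photons
print(e_small / e_large)               # expect roughly sqrt(100) = 10
```

The same experiment, run with target error levels of 1%, 1e-4, and 1e-6, gives the photon counts (roughly 1e4, 1e8, and 1e12 per estimated quantity) that feed directly into the CPU-time and MPI-communication concerns above.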
Problem input files are found on narwhal in:
The code has to be compiled separately for each test run, as the problem sizes differ. To find out the problem size from the input files, execute:
then edit domain.f90 to use these parameters by changing nx_gl, ny_gl and nz_gl.
The current git repository version is set to use 1 processor, but the code still uses MPI, so submit it to the MPI queue. The libraries have been configured for the saw cluster (modify to run on other clusters). Compile on saw with:
To run with more than 1 processor, modify nsubdomains_x and nsubdomains_y in domain.f90, then run with the number of processors equal to nsubdomains_x*nsubdomains_y.
The following modules need to be loaded before compiling and running on redfin:
module unload intel
module unload openmpi
module load intel/11.1.069
module load openmpi/intel/1.4.3
module load pnetcdf/intel/1.2.0