The Lattice Boltzmann Simulation on Multi-GPU Systems

Thor Kristian Valderhaug
Department of Computer and Information Science, Norwegian University of Science and Technology, 2011


@mastersthesis{valderhaug2011lattice,
   title={The Lattice Boltzmann Simulation on Multi-GPU Systems},
   author={Valderhaug, T.K.},
   school={Norwegian University of Science and Technology},
   year={2011}
}





The Lattice Boltzmann Method (LBM) is widely used to simulate different types of flow, such as water, oil, and gas in porous reservoirs. In the oil industry it is commonly used to estimate petrophysical properties of porous rocks, such as permeability. Achieving the required accuracy demands large simulation models and, with them, large amounts of memory. The method is highly data-intensive, making it well suited for offloading to the GPU. However, the limited memory available on modern GPUs severely restricts the size of the datasets that can be simulated. In this thesis, we increase the size of the datasets that can be simulated by applying techniques that lower the memory requirement while retaining numerical precision. These techniques increase the dataset size that fits on a single GPU by about 20 times for datasets with 15% porosity. We then develop multi-GPU simulations for different hardware configurations using OpenCL and MPI to investigate how LBM scales when simulating large datasets. The performance of the implementations is measured using three porous rock datasets provided by Numerical Rocks AS. By connecting two Tesla S2070s to a single host we achieve a speedup of 1.95 compared to using a single GPU. For large datasets we are able to completely hide the host-to-host communication in a cluster configuration, showing that LBM scales well and is suitable for simulation on a cluster with GPUs. The correctness of the implementations is confirmed against an analytically known flow, and against three datasets with known permeability, also provided by Numerical Rocks AS.
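One common way to cut LBM memory use on porous datasets, where only a small fraction of nodes are fluid, is sparse, indirectly addressed storage: only fluid nodes keep their distribution values, at the cost of an explicit neighbour table. The sketch below is a hypothetical illustration of that idea only; the layout (D3Q19), names, and sizes are assumptions and not taken from the thesis, and this mechanism alone accounts for just part of a ~20x reduction.

```python
import numpy as np

# Illustrative sketch (NOT the thesis's implementation): sparse "indirect
# addressing" storage for a D3Q19 lattice. Solid nodes store nothing; each
# fluid node stores Q distribution values plus Q neighbour indices.

Q = 19            # velocity directions in D3Q19
VAL_BYTES = 4     # float32 per distribution value (assumed precision)
IDX_BYTES = 4     # int32 per neighbour-table entry

def build_sparse_lattice(solid_mask):
    """Map each fluid cell to a compact index; solid cells map to -1."""
    fluid = ~solid_mask
    index = -np.ones(solid_mask.shape, dtype=np.int32)
    index[fluid] = np.arange(fluid.sum(), dtype=np.int32)
    return index, int(fluid.sum())

def memory_bytes(n_total, n_fluid):
    dense = n_total * Q * VAL_BYTES                    # full grid, every node
    sparse = n_fluid * Q * (VAL_BYTES + IDX_BYTES)     # values + neighbours
    return dense, sparse

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "rock": ~15% of cells are fluid (True = solid).
    mask = rng.random((64, 64, 64)) > 0.15
    index, n_fluid = build_sparse_lattice(mask)
    dense, sparse = memory_bytes(mask.size, n_fluid)
    print(f"fluid fraction: {n_fluid / mask.size:.2f}")
    print(f"dense: {dense / 2**20:.1f} MiB, sparse: {sparse / 2**20:.1f} MiB")
```

At 15% porosity this indirect scheme alone saves roughly a factor of three; reaching a ~20x improvement would require combining it with further techniques, which the abstract does not detail.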

* * *


HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
