
Memory transfer optimization for a lattice Boltzmann solver on Kepler architecture nVidia GPUs

Mark Mawson, Alistair Revell
University of Manchester, Oxford Road, Manchester, Greater Manchester, M13 9PL, UK
arXiv:1309.1983 [cs.DC] (8 Sep 2013)

@article{2013arXiv1309.1983M,
   author = {{Mawson}, M. and {Revell}, A.},
   title = "{Memory transfer optimization for a lattice Boltzmann solver on Kepler architecture nVidia GPUs}",
   journal = {ArXiv e-prints},
   archivePrefix = "arXiv",
   eprint = {1309.1983},
   primaryClass = "cs.DC",
   keywords = {Computer Science - Distributed, Parallel, and Cluster Computing},
   year = 2013,
   month = sep,
   adsurl = {http://adsabs.harvard.edu/abs/2013arXiv1309.1983M},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


The lattice Boltzmann method (LBM) for solving fluid flow is naturally well suited to efficient implementation on massively parallel hardware, due to the prevalence of local operations in the algorithm. This paper presents and analyses the performance of a 3D lattice Boltzmann solver optimised for third-generation nVidia GPU hardware, also known as 'Kepler'. We provide a review of previous optimisation strategies and analyse data read/write times for different memory types. In LBM, the time propagation step (known as streaming) involves shifting data to adjacent locations and is central to parallel performance; here we examine three approaches which make use of different hardware options. Two of these exploit 'performance-enhancing' features of the GPU: shared memory and the new shuffle instruction found in Kepler-based GPUs. These are compared to a standard transfer of data which relies instead on an optimised storage layout to increase coalesced access. It is shown that the simpler approach is the most efficient: because LBM requires a large number of registers per thread, the block size is limited and the benefit of these special features is reduced. Detailed results are obtained for a D3Q19 LBM solver, which is benchmarked on nVidia K5000M and K20C GPUs. In the latter case the use of the read-only data cache is explored, and a peak performance of over 1036 Million Lattice Updates Per Second (MLUPS) is achieved. The appearance of a periodic bottleneck in solver performance, believed to be hardware-related, is also reported: spikes in iteration time occur at a frequency of around 11 Hz on both GPUs, independent of the problem size.
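To make the comparison concrete, below is a minimal sketch (not the authors' code) of the 'standard' pull-style streaming step for a D3Q19 lattice. It assumes a structure-of-arrays layout, f[q * n + idx], so that adjacent threads read adjacent addresses and global accesses coalesce, and it routes loads through Kepler's read-only data cache via __ldg(), as explored on the K20C. The kernel name and velocity-set storage are hypothetical.

```cuda
#define Q 19  // number of discrete velocities in D3Q19

// Discrete velocity set (assumed ordering; stored in constant memory).
__constant__ int cx[Q], cy[Q], cz[Q];

// Pull-style streaming: each thread gathers the post-collision distributions
// from its upstream neighbours, so writes to f_dst are fully coalesced.
__global__ void stream_pull(const float* __restrict__ f_src,
                            float*       __restrict__ f_dst,
                            int nx, int ny, int nz)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;  // contiguous along x
    int y = blockIdx.y;
    int z = blockIdx.z;
    if (x >= nx) return;

    int n   = nx * ny * nz;
    int idx = (z * ny + y) * nx + x;

    for (int q = 0; q < Q; ++q) {
        // Upstream neighbour of this site, with periodic wrap-around.
        int xs  = (x - cx[q] + nx) % nx;
        int ys  = (y - cy[q] + ny) % ny;
        int zs  = (z - cz[q] + nz) % nz;
        int src = (zs * ny + ys) * nx + xs;

        // __ldg() loads through the read-only data cache on compute
        // capability 3.5 (e.g. the K20C).
        f_dst[q * n + idx] = __ldg(&f_src[q * n + src]);
    }
}
```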
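For contrast, here is a sketch of the warp-shuffle idea applied to the x-direction shift alone, again hypothetical rather than the authors' kernel. If consecutive threads own consecutive x sites, the shuffle instruction (written here with the modern __shfl_up_sync intrinsic; the Kepler-era spelling was __shfl_up) moves a distribution value from one lane's register to the next without touching memory, and only the first lane of each warp falls back to a global load. As the abstract notes, the register pressure of a full D3Q19 kernel limits the block size, which is why this approach did not outperform the plain coalesced transfer.

```cuda
// Shift a single distribution f_q one site in +x using register exchange
// within a warp; periodic boundary assumed at x = 0.
__global__ void stream_x_shuffle(const float* __restrict__ f_src,
                                 float*       __restrict__ f_dst,
                                 int nx)
{
    int x    = blockIdx.x * blockDim.x + threadIdx.x;
    int lane = threadIdx.x & 31;  // lane index within the warp

    // All lanes stay active so the full-warp shuffle mask is valid.
    float val     = (x < nx) ? f_src[x] : 0.0f;
    float shifted = __shfl_up_sync(0xffffffffu, val, 1);  // value from lane-1

    if (lane == 0)  // warp boundary: no lane-1 source, read global memory
        shifted = (x > 0) ? f_src[x - 1] : f_src[nx - 1];

    if (x < nx) f_dst[x] = shifted;
}
```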
