
Towards accelerating Smoothed Particle Hydrodynamics simulations for free-surface flows on multi-GPU clusters

Daniel Valdez-Balderas, Jose M. Dominguez, Benedict D. Rogers, Alejandro J.C. Crespo
Modelling and Simulation Centre (MaSC), School of Mechanical, Aerospace & Civil Engineering, University of Manchester, Manchester, M13 9PL, UK
arXiv:1210.1017 [physics.comp-ph] (3 Oct 2012)

@article{2012arXiv1210.1017V,
   author = {Valdez-Balderas, Daniel and Dominguez, Jose M. and Rogers, Benedict D. and Crespo, Alejandro J.C.},
   title = {Towards accelerating Smoothed Particle Hydrodynamics simulations for free-surface flows on multi-GPU clusters},
   journal = {ArXiv e-prints},
   archivePrefix = {arXiv},
   eprint = {1210.1017},
   primaryClass = {physics.comp-ph},
   keywords = {Computational Physics; Instrumentation and Methods for Astrophysics; Other Condensed Matter; Fluid Dynamics},
   year = {2012},
   month = {oct}
}


Starting from the single graphics processing unit (GPU) version of the Smoothed Particle Hydrodynamics (SPH) code DualSPHysics, a multi-GPU SPH program is developed for free-surface flows. The approach is based on a spatial decomposition technique, whereby different portions (sub-domains) of the physical system under study are assigned to different GPUs. Communication between devices is achieved using Message Passing Interface (MPI) routines. The use of the radix sort algorithm for inter-GPU particle migration and sub-domain halo building (which enables interaction between SPH particles of different sub-domains) is described in detail. The resulting scheme makes it possible, on the one hand, to carry out simulations that could also be performed on a single GPU, but now faster than on one such device alone; on the other hand, it enables accelerated simulations with up to 32 million particles on the current architecture, which exceeds the memory limits of a single GPU. A study of the weak and strong scaling behaviour, speedups, and efficiency of the resulting program is presented, including an investigation to elucidate the computational bottlenecks. Finally, possibilities for reducing the effects of overhead on computational efficiency in future versions of the scheme are discussed.
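The following is a minimal sketch (not the authors' code) of the sort-then-exchange pattern the abstract describes, assuming a one-dimensional decomposition along x. It uses Thrust's `sort_by_key` (which dispatches to a radix sort for integer keys) to make outgoing particles contiguous, then exchanges them with neighbouring ranks via plain MPI; all names such as `Particle`, `DestinationKey`, and `migrate` are illustrative, not DualSPHysics API.

```cpp
// migrate_sketch.cu -- hypothetical sort-then-exchange step for a 1-D
// x-decomposition. Callers must have initialised MPI and may pass
// MPI_PROC_NULL for leftRank/rightRank at the domain ends.
#include <mpi.h>
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/sort.h>
#include <thrust/count.h>
#include <thrust/transform.h>

struct Particle { float x, y, z, vx, vy, vz; };

// Classify a particle by its x position relative to the local sub-domain:
// 0 = stays local, 1 = migrates left, 2 = migrates right.
struct DestinationKey {
    float xmin, xmax;                       // local sub-domain bounds
    __host__ __device__ int operator()(const Particle& p) const {
        if (p.x <  xmin) return 1;
        if (p.x >= xmax) return 2;
        return 0;
    }
};

void migrate(thrust::device_vector<Particle>& parts,
             float xmin, float xmax, int leftRank, int rightRank)
{
    const int n = static_cast<int>(parts.size());
    thrust::device_vector<int> key(n);
    thrust::transform(parts.begin(), parts.end(), key.begin(),
                      DestinationKey{xmin, xmax});

    // Radix sort by destination: [local | to-left | to-right] become contiguous.
    thrust::sort_by_key(key.begin(), key.end(), parts.begin());

    const int nLocal = static_cast<int>(thrust::count(key.begin(), key.end(), 0));
    const int nLeft  = static_cast<int>(thrust::count(key.begin(), key.end(), 1));
    const int nRight = n - nLocal - nLeft;

    // Stage outgoing particles on the host (CUDA-aware MPI could instead
    // send device pointers directly).
    thrust::host_vector<Particle> sendLeft(parts.begin() + nLocal,
                                           parts.begin() + nLocal + nLeft);
    thrust::host_vector<Particle> sendRight(parts.begin() + nLocal + nLeft,
                                            parts.end());

    // Exchange counts, then particle payloads, with each neighbour.
    int recvLeftN = 0, recvRightN = 0;
    MPI_Sendrecv(&nLeft, 1, MPI_INT, leftRank, 0,
                 &recvRightN, 1, MPI_INT, rightRank, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&nRight, 1, MPI_INT, rightRank, 1,
                 &recvLeftN, 1, MPI_INT, leftRank, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    thrust::host_vector<Particle> recvLeft(recvLeftN), recvRight(recvRightN);
    MPI_Sendrecv(sendLeft.data(),  nLeft * (int)sizeof(Particle), MPI_BYTE, leftRank, 2,
                 recvRight.data(), recvRightN * (int)sizeof(Particle), MPI_BYTE, rightRank, 2,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(sendRight.data(), nRight * (int)sizeof(Particle), MPI_BYTE, rightRank, 3,
                 recvLeft.data(),  recvLeftN * (int)sizeof(Particle), MPI_BYTE, leftRank, 3,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // Keep local particles, append arrivals from both neighbours.
    parts.resize(nLocal);
    parts.insert(parts.end(), recvLeft.begin(), recvLeft.end());
    parts.insert(parts.end(), recvRight.begin(), recvRight.end());
}
```

Halo building would follow the same sort-then-exchange pattern, except that particles within a smoothing-length band of the sub-domain edge are copied (rather than moved) to the neighbouring GPU at each step so that cross-boundary particle interactions can be computed.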
