Scalable Simulation of Tsunamis Generated by Submarine Landslides on GPU clusters
Dpto. de Lenguajes y Sistemas Informáticos, Universidad de Granada, Spain
Universidad de Granada, 2012
@article{de2012scalable,
  title={Scalable Simulation of Tsunamis Generated by Submarine Landslides on GPU clusters},
  author={de la Asunci{\'o}n, M. and Mantas, J. M. and Castro, M. J. and Ortega, S.},
  year={2012}
}
In this work we describe a GPU implementation, using the CUDA framework over structured meshes, of a first-order two-layer Savage-Hutter type model introduced by E. D. Fernández-Nieto et al. in 2008 to simulate tsunamis generated by underwater landslides. We also describe an extension of this implementation which exploits the parallel power of a GPU cluster by using MPI and applying a non-standard row-block decomposition of the spatial domain. This distributed implementation, which overlaps MPI communications with GPU computations and with memory transfers between GPU and CPU, is tested on several artificial and realistic problems using up to 24 GPUs. The influence of the amount of wet and dry zones on the obtained scalability is analyzed. Numerical experiments show that good weak and strong scalability is reached when all the submeshes have the same number of rows, and that the distribution of the wet and dry zones among the submeshes is an important factor affecting the obtained scalability.
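The row-block decomposition mentioned in the abstract can be illustrated with a minimal sketch: the structured mesh is split into horizontal bands of rows, one contiguous band per GPU/MPI process. This is a hypothetical illustration in Python, not the paper's actual CUDA/MPI code; the function name and interface are assumptions.

```python
def row_block_partition(num_rows, num_procs):
    """Assign a contiguous block of mesh rows to each process.

    Returns a list of (first_row, last_row) pairs (inclusive) so that
    every submesh has the same number of rows, up to a difference of
    one -- the balanced case the abstract reports as giving good weak
    and strong scalability.
    """
    base, extra = divmod(num_rows, num_procs)
    blocks, start = [], 0
    for rank in range(num_procs):
        rows = base + (1 if rank < extra else 0)
        blocks.append((start, start + rows - 1))
        start += rows
    return blocks

# Example: split a 1000-row mesh among 24 GPUs, as in the paper's
# largest test configuration (the row count here is made up).
blocks = row_block_partition(1000, 24)
print(blocks[0], blocks[-1])
```

In the distributed implementation, each submesh would additionally exchange its boundary ("ghost") rows with neighbouring processes each time step, with those MPI transfers overlapped against GPU computation as the abstract describes.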
October 26, 2013 by hgpu