Multi-GPU thermal lattice Boltzmann simulations using OpenACC and MPI

Ao Xu, Bo-Tao Li
School of Aeronautics, Northwestern Polytechnical University, Xi’an 710072, China
arXiv:2211.03160 [physics.flu-dyn], 6 Nov 2022

@misc{xu2022multigpu,
   title     = {Multi-GPU thermal lattice Boltzmann simulations using OpenACC and MPI},
   author    = {Xu, Ao and Li, Bo-Tao},
   year      = {2022},
   publisher = {arXiv},
   doi       = {10.48550/ARXIV.2211.03160},
   url       = {https://arxiv.org/abs/2211.03160},
   keywords  = {Fluid Dynamics (physics.flu-dyn), Computational Engineering, Finance, and Science (cs.CE), FOS: Physical sciences, FOS: Computer and information sciences},
   copyright = {arXiv.org perpetual, non-exclusive license}
}

We assess the performance of the hybrid Open Accelerator (OpenACC) and Message Passing Interface (MPI) approach for thermal lattice Boltzmann (LB) simulations accelerated by multiple graphics processing units (GPUs). OpenACC accelerates computation on a single GPU, and MPI synchronizes information between multiple GPUs. With a single GPU, the two-dimensional (2D) simulation achieved 1.93 billion lattice updates per second (GLUPS) on a grid of 8193^2, and the three-dimensional (3D) simulation achieved 1.04 GLUPS on a grid of 385^3, which is more than 76% of the theoretical maximum performance. On multiple GPUs, we adopt block partitioning, overlap communications with computations, and use concurrent computation to optimize parallel efficiency. In the strong scaling test, using 16 GPUs, the 2D simulation achieved 30.42 GLUPS and the 3D simulation achieved 14.52 GLUPS. In the weak scaling test, the parallel efficiency remained above 99% up to 16 GPUs. Our results demonstrate that, with improved data and task management, the hybrid OpenACC and MPI technique is promising for thermal LB simulations on multiple GPUs.
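As a rough illustration of the hybrid programming model described in the abstract, the sketch below combines an OpenACC-offloaded grid update with MPI halo exchanges under 1D block partitioning along y. It is not the authors' code: the grid sizes, field names, and the Jacobi-like update standing in for the LB collision-streaming step are illustrative assumptions, and the halo exchange is kept blocking for clarity, whereas the paper overlaps communication with interior computation.

/* Minimal sketch (not the authors' code): hybrid OpenACC + MPI pattern with
 * 1D block partitioning along y. A Jacobi-like stencil stands in for the LB
 * collision-streaming step; grid sizes and field names are illustrative. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NX  512                 /* lattice nodes in x (illustrative) */
#define NYL 128                 /* lattice rows in y owned by each rank (illustrative) */
#define NSTEPS 100              /* number of time steps (illustrative) */
#define IDX(i, j) ((j) * NX + (i))

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int up   = (rank + 1) % size;           /* periodic neighbours in y */
    int down = (rank - 1 + size) % size;

    /* Local slab plus one ghost row on each side: rows 0 and NYL+1 are halos. */
    size_t n = (size_t)NX * (NYL + 2);
    double *T     = malloc(n * sizeof(double));
    double *T_new = malloc(n * sizeof(double));
    for (size_t k = 0; k < n; ++k) T[k] = (double)rank;

    #pragma acc data copyin(T[0:n]) create(T_new[0:n])
    {
        for (int step = 0; step < NSTEPS; ++step) {
            /* 1. Bring the two boundary rows back to the host for the exchange. */
            #pragma acc update self(T[IDX(0,1):NX], T[IDX(0,NYL):NX])

            /* 2. Halo exchange with neighbouring ranks (blocking here for
             *    clarity; the paper overlaps this with interior computation). */
            MPI_Sendrecv(&T[IDX(0, NYL)],   NX, MPI_DOUBLE, up,   0,
                         &T[IDX(0, 0)],     NX, MPI_DOUBLE, down, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&T[IDX(0, 1)],     NX, MPI_DOUBLE, down, 1,
                         &T[IDX(0, NYL+1)], NX, MPI_DOUBLE, up,   1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* 3. Push the received halo rows back to the device. */
            #pragma acc update device(T[IDX(0,0):NX], T[IDX(0,NYL+1):NX])

            /* 4. Grid update on the GPU (stand-in for LB collision-streaming). */
            #pragma acc parallel loop collapse(2) present(T[0:n], T_new[0:n])
            for (int j = 1; j <= NYL; ++j)
                for (int i = 1; i < NX - 1; ++i)
                    T_new[IDX(i, j)] = 0.25 * (T[IDX(i-1, j)] + T[IDX(i+1, j)] +
                                               T[IDX(i, j-1)] + T[IDX(i, j+1)]);

            /* 5. Copy the updated interior back on the device. */
            #pragma acc parallel loop collapse(2) present(T[0:n], T_new[0:n])
            for (int j = 1; j <= NYL; ++j)
                for (int i = 1; i < NX - 1; ++i)
                    T[IDX(i, j)] = T_new[IDX(i, j)];
        }
    }

    if (rank == 0) printf("done on %d ranks\n", size);
    free(T); free(T_new);
    MPI_Finalize();
    return 0;
}

Built with an OpenACC-capable compiler (for example, nvc's -acc flag via an MPI compiler wrapper), each MPI rank drives one GPU. In an overlapped variant, as used in the paper, each rank would launch the interior update asynchronously while exchanging only the boundary rows.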
