Hybrid MPI and CUDA Parallelization for CFD Applications on Multi-GPU HPC Clusters

Jianqi Lai, Hang Yu, Zhengyu Tian, Hua Li
College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
Scientific Programming (Hindawi), Volume 2020, Article ID 8862123, 15 pages, 2020

@article{lai2020hybrid,
   title={Hybrid MPI and CUDA Parallelization for CFD Applications on Multi-GPU HPC Clusters},
   author={Lai, Jianqi and Yu, Hang and Tian, Zhengyu and Li, Hua},
   journal={Scientific Programming},
   volume={2020},
   pages={8862123},
   publisher={Hindawi},
   year={2020}
}

Graphics processing units (GPUs) offer high floating-point throughput and memory bandwidth for data-parallel workloads and have been widely used in high-performance computing (HPC). The compute unified device architecture (CUDA) provides a parallel computing platform and programming model that reduces the complexity of GPU programming, and programmable GPUs are becoming popular in computational fluid dynamics (CFD) applications. In this work, we propose a hybrid parallel algorithm combining the message passing interface (MPI) and CUDA for CFD applications on multi-GPU HPC clusters. The AUSM+UP upwind scheme and the three-step Runge-Kutta method are used for spatial and temporal discretization, respectively, and turbulence is modeled with the k-ω SST two-equation model. The CPU only manages GPU execution and communication, while the GPU performs the computation. Parallel execution and memory access optimizations are applied to the GPU-based CFD code. We propose a nonblocking communication method that fully overlaps GPU computing, CPU-CPU communication, and CPU-GPU data transfer by creating two CUDA streams. Furthermore, a one-dimensional domain decomposition is used to balance the workload among GPUs. Finally, we evaluate the hybrid parallel algorithm on compressible turbulent flow over a flat plate, discussing the performance of a single-GPU implementation and the scalability of multi-GPU clusters. Performance measurements show that multi-GPU parallelization achieves a speedup of more than 36 times over CPU-based parallel computing, and the parallel algorithm has good scalability.
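The overlap strategy described in the abstract can be illustrated with a short, hypothetical MPI+CUDA sketch; it is not the authors' code. It assumes a structured grid decomposed one-dimensionally into equal z-slabs, one GPU per MPI rank, and two CUDA streams so that an interior-cell kernel runs while one boundary plane is exchanged (device-to-host copy, nonblocking MPI send/receive, host-to-device copy, boundary kernel). The kernel names, grid dimensions, and the single-plane ring exchange are illustrative assumptions.

#include <mpi.h>
#include <cuda_runtime.h>

// Hypothetical kernels: update_interior works on cells that need no halo data;
// update_boundary updates the slab face once the halo has arrived. Bodies omitted.
__global__ void update_interior(double* q, int nx, int ny, int nz) { /* flux/update omitted */ }
__global__ void update_boundary(double* q, const double* halo, int nx, int ny) { /* omitted */ }

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // One GPU per MPI rank.
    int ndev = 1;
    cudaGetDeviceCount(&ndev);
    cudaSetDevice(rank % ndev);

    // 1-D domain decomposition: each rank owns an equal slab of z-planes
    // (assumes nz_global is divisible by the number of ranks).
    const int nx = 128, ny = 128, nz_global = 256;
    const int nz = nz_global / size;
    const size_t plane = (size_t)nx * ny;                 // cells in one z-plane

    double *q_dev, *halo_dev;
    cudaMalloc(&q_dev, plane * nz * sizeof(double));
    cudaMalloc(&halo_dev, plane * sizeof(double));
    double *send_host, *recv_host;
    cudaMallocHost(&send_host, plane * sizeof(double));   // pinned memory for async copies
    cudaMallocHost(&recv_host, plane * sizeof(double));

    // Two CUDA streams: one for interior computation, one for halo traffic.
    cudaStream_t comp_stream, comm_stream;
    cudaStreamCreate(&comp_stream);
    cudaStreamCreate(&comm_stream);

    const int next = (rank + 1) % size, prev = (rank + size - 1) % size;

    // Interior kernel runs on comp_stream; it does not touch the slab faces,
    // so it overlaps with the halo exchange below.
    update_interior<<<dim3(nx / 16, ny / 16, nz - 2), dim3(16, 16, 1), 0, comp_stream>>>(q_dev, nx, ny, nz);

    // Stage the outgoing face (last z-plane of the slab) to pinned host memory.
    cudaMemcpyAsync(send_host, q_dev + plane * (nz - 1), plane * sizeof(double),
                    cudaMemcpyDeviceToHost, comm_stream);
    cudaStreamSynchronize(comm_stream);

    // Nonblocking MPI halo exchange overlaps with the interior kernel.
    MPI_Request reqs[2];
    MPI_Isend(send_host, (int)plane, MPI_DOUBLE, next, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(recv_host, (int)plane, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    // Move the received halo to the GPU and update the boundary cells.
    cudaMemcpyAsync(halo_dev, recv_host, plane * sizeof(double),
                    cudaMemcpyHostToDevice, comm_stream);
    update_boundary<<<dim3(nx / 16, ny / 16), dim3(16, 16), 0, comm_stream>>>(q_dev, halo_dev, nx, ny);

    cudaDeviceSynchronize();   // both streams finished: one time step is complete

    cudaStreamDestroy(comp_stream);
    cudaStreamDestroy(comm_stream);
    cudaFree(q_dev); cudaFree(halo_dev);
    cudaFreeHost(send_host); cudaFreeHost(recv_host);
    MPI_Finalize();
    return 0;
}

Because the interior kernel never touches the slab faces, it can proceed on comp_stream while the halo traffic runs on comm_stream; with a CUDA-aware MPI implementation, the staging copies through pinned host memory could be replaced by passing device pointers to MPI directly.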