Speed, power and cost implications for GPU acceleration of Computational Fluid Dynamics on HPC systems

Zachary Cooper-Baldock, Brenda Vara Almirall, Kiao Inthavong
National Computational Infrastructure, Australian National University, Canberra, Australia
arXiv:2404.02482 [cs.DC], (3 Apr 2024)

@misc{cooperbaldock2024speed,
   title={Speed, power and cost implications for GPU acceleration of Computational Fluid Dynamics on HPC systems},
   author={Zachary Cooper-Baldock and Brenda Vara Almirall and Kiao Inthavong},
   year={2024},
   eprint={2404.02482},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

Computational Fluid Dynamics (CFD) is the simulation of fluid flow using computational hardware. The underlying equations are computationally challenging to solve and necessitate high performance computing (HPC) to resolve in a practical timeframe when a reasonable level of fidelity is required. The simulations are memory intensive and have previously been limited to central processing unit (CPU) solvers, as graphics processing unit (GPU) video random access memory (VRAM) was insufficient. However, with recent developments in GPU design and increases in VRAM, GPU acceleration of CPU-solved workflows is now possible. At HPC scale, however, many operational details are still unknown. This paper utilizes ANSYS Fluent, a leading commercial code in CFD, to investigate the compute speed, power consumption and service unit (SU) cost considerations for the GPU acceleration of CFD workflows on HPC architectures. To provide a comprehensive analysis, different CPU architectures and GPUs have been assessed. GPU compute speed is found to be faster; however, the initialisation speed, power and cost performance is less clear-cut. Whilst the larger A100 cards perform well with respect to power consumption, this is not observed for the V100 cards. In situations where more than one GPU is required, their adoption may not be beneficial from a power or cost perspective.
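The trade-off the abstract describes (faster GPU solve times, but less clear-cut power and SU-cost outcomes) can be sketched with a small energy-to-solution and service-unit calculation. All numbers, SU rates and node configurations below are hypothetical placeholders for illustration only, not values from the paper:

```python
# Illustrative comparison of energy-to-solution and service-unit (SU) cost
# for a CPU-only node vs. a GPU-accelerated node running the same CFD case.
# Every figure here is a made-up placeholder, not a result from the paper.

def energy_to_solution_wh(power_w: float, runtime_s: float) -> float:
    """Energy consumed for one simulation, in watt-hours."""
    return power_w * runtime_s / 3600.0

def su_cost(runtime_s: float, su_rate_per_hour: float) -> float:
    """Service-unit charge for one simulation at a given hourly SU rate."""
    return su_rate_per_hour * runtime_s / 3600.0

# Hypothetical scenario: a 400 W CPU node solves the case in 4 h; a 900 W
# node with two GPUs solves it in 1.5 h but is charged at a 3x SU rate.
cpu = {"power_w": 400.0, "runtime_s": 4.0 * 3600, "su_rate": 2.0}
gpu = {"power_w": 900.0, "runtime_s": 1.5 * 3600, "su_rate": 6.0}

cpu_energy = energy_to_solution_wh(cpu["power_w"], cpu["runtime_s"])  # 1600 Wh
gpu_energy = energy_to_solution_wh(gpu["power_w"], gpu["runtime_s"])  # 1350 Wh
cpu_su = su_cost(cpu["runtime_s"], cpu["su_rate"])                    # 8 SU
gpu_su = su_cost(gpu["runtime_s"], gpu["su_rate"])                    # 9 SU

# With these placeholder numbers the GPU node is faster and uses less
# energy, yet still costs more in SUs -- illustrating why speed alone
# does not settle the power or cost question.
print(cpu_energy, gpu_energy, cpu_su, gpu_su)
```

Under these assumed rates, the faster GPU run still charges more SUs than the CPU run, mirroring the paper's observation that multi-GPU adoption may not be beneficial from a cost perspective.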

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
