
GPU parallelization of a hybrid pseudospectral fluid turbulence framework using CUDA

Duane Rosenberg, Pablo D. Mininni, Raghu Reddy, Annick Pouquet
National Center for Atmospheric Research, Boulder
arXiv:1808.01309 [physics.comp-ph] (3 Aug 2018)

@article{rosenberg2018parallelization,
   title={GPU parallelization of a hybrid pseudospectral fluid turbulence framework using CUDA},
   author={Rosenberg, Duane and Mininni, Pablo D. and Reddy, Raghu and Pouquet, Annick},
   year={2018},
   month={aug},
   eprint={1808.01309},
   archivePrefix={arXiv},
   primaryClass={physics.comp-ph}
}


An existing hybrid MPI-OpenMP scheme is augmented with a CUDA-based fine-grain parallelization approach for multidimensional distributed Fourier transforms in a well-characterized pseudospectral fluid turbulence code. Basics of the hybrid scheme are reviewed, and heuristics are provided to show a potential benefit of the CUDA implementation. The method draws heavily on the CUDA runtime library to handle memory management, and on the cuFFT library for computing local FFTs. The construction of the interfaces to these libraries, and the use of ISO bindings to facilitate platform portability, are discussed. CUDA streams are implemented to overlap data transfer with cuFFT computation. Testing with a baseline solver demonstrates significant aggregate speed-up over the hybrid MPI-OpenMP solver by offloading to GPUs on an NVLink-based test system. While the batched, streamed approach provides little benefit with NVLink, we see a performance gain of 30% when the number of streams is tuned optimally on a PCIe-based system. It is found that strong GPU scaling is ideal, or slightly better than ideal, in all cases. In addition to speed-up measurements for the fiducial solver, we also consider several other solvers with different numbers of transform operations and find that aggregate speed-ups are nearly constant for all solvers.
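To make the streaming strategy concrete, the sketch below shows one common way to overlap host-device transfers with batched cuFFT execution across several CUDA streams. This is a minimal illustration, not code from the paper's framework: the transform length NX, batch size NBATCH, and stream count NSTREAMS are placeholder values, and a real pseudospectral solver would apply this pattern per slab inside its distributed transform rather than in a standalone main().

/* Minimal sketch (assumed sizes, not the paper's code): overlap host<->device
 * copies with batched 1-D cuFFT transforms using one plan and one stream per slab.
 * Build with: nvcc streams_fft.c -lcufft                                        */
#include <cuda_runtime.h>
#include <cufft.h>
#include <stdio.h>

#define NX       256          /* transform length (assumed)        */
#define NBATCH   256          /* 1-D transforms per slab (assumed) */
#define NSTREAMS 4            /* number of CUDA streams (tunable)  */

int main(void)
{
    size_t slab = (size_t)NX * NBATCH * sizeof(cufftDoubleComplex);

    /* Pinned host buffer so cudaMemcpyAsync can truly overlap with compute. */
    cufftDoubleComplex *h_data, *d_data;
    cudaMallocHost((void **)&h_data, NSTREAMS * slab);
    cudaMalloc((void **)&d_data, NSTREAMS * slab);

    cudaStream_t stream[NSTREAMS];
    cufftHandle  plan[NSTREAMS];
    for (int s = 0; s < NSTREAMS; ++s) {
        cudaStreamCreate(&stream[s]);
        /* One batched 1-D Z2Z plan per stream; cufftSetStream binds the plan's
         * execution to that stream so FFTs and copies in different streams overlap. */
        int n = NX;
        cufftPlanMany(&plan[s], 1, &n, NULL, 1, NX, NULL, 1, NX,
                      CUFFT_Z2Z, NBATCH);
        cufftSetStream(plan[s], stream[s]);
    }

    /* ... fill h_data with one slab of field data per stream ... */

    for (int s = 0; s < NSTREAMS; ++s) {
        cufftDoubleComplex *h = h_data + (size_t)s * NX * NBATCH;
        cufftDoubleComplex *d = d_data + (size_t)s * NX * NBATCH;

        /* Stage the slab to the GPU, transform it in place, and copy it back;
         * each step is issued asynchronously in the slab's own stream.        */
        cudaMemcpyAsync(d, h, slab, cudaMemcpyHostToDevice, stream[s]);
        cufftExecZ2Z(plan[s], d, d, CUFFT_FORWARD);
        cudaMemcpyAsync(h, d, slab, cudaMemcpyDeviceToHost, stream[s]);
    }

    cudaDeviceSynchronize();   /* wait for all streams to drain */

    for (int s = 0; s < NSTREAMS; ++s) {
        cufftDestroy(plan[s]);
        cudaStreamDestroy(stream[s]);
    }
    cudaFree(d_data);
    cudaFreeHost(h_data);
    return 0;
}

Pinned (page-locked) host memory is what allows the asynchronous copies to overlap with the in-flight FFTs; the stream count is the tuning knob the abstract refers to, which mattered on the PCIe-based system but provided little benefit with NVLink.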
