GPU acceleration of the dynamics routine in the HIRLAM weather forecast model

Van Thieu Vu, Gerard Cats, Lex Wolters
Leiden University, 2333 CA Leiden, The Netherlands
International Conference on High Performance Computing and Simulation (HPCS), 2010

@conference{vu2010gpu,
   title={GPU acceleration of the dynamics routine in the HIRLAM weather forecast model},
   author={Vu, V. T. and Cats, G. and Wolters, L.},
   booktitle={High Performance Computing and Simulation (HPCS), 2010 International Conference on},
   pages={31--38},
   year={2010},
   organization={IEEE}
}

Programmable graphics processing units (GPUs) nowadays offer very high computing performance at relatively low hardware cost and power consumption. In this paper, we present the implementation of the dynamics routine of the HIRLAM weather forecast model on the NVIDIA GeForce 9800 GX2 GPU card, using the Compute Unified Device Architecture (CUDA) as the parallel programming model. We converted the original Fortran to C and CUDA by hand, straightforwardly, without much concern for optimization. On a single GPU, we observe speed-ups of an order of magnitude over our host CPU (Intel quad core, 1998 MHz). This figure includes the relatively costly copying of data between GPU and CPU memories; the calculation time itself decreased by a factor of 2000. A single GPU, however, does not have enough memory for practical use. We therefore investigated a parallel implementation on 4 GPUs and found a parallel speed-up of 3.6, which is not very promising if memory limitations force the use of many GPUs in parallel. We discuss several options to address this issue.
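The abstract's central observation is that host-device data transfers dominate the total GPU runtime. A minimal CUDA sketch of that offload pattern (not code from the paper; the toy `advect` kernel and all names are hypothetical) times a kernel together with its CPU-GPU copies, the same end-to-end measurement the authors describe:

```cuda
// Hypothetical illustration: kernel launch plus host<->device copies,
// timed together with CUDA events, as in the paper's end-to-end speed-ups.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Toy 1-D stencil standing in for a dynamics grid-point update.
__global__ void advect(const float *u, float *out, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n - 1)
        out[i] = u[i] - dt * 0.5f * (u[i + 1] - u[i - 1]);
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *h_u = (float *)malloc(bytes), *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_u[i] = (float)i;

    float *d_u, *d_out;
    cudaMalloc(&d_u, bytes);
    cudaMalloc(&d_out, bytes);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    cudaEventRecord(t0);

    cudaMemcpy(d_u, h_u, bytes, cudaMemcpyHostToDevice);     // CPU -> GPU
    advect<<<(n + 255) / 256, 256>>>(d_u, d_out, n, 0.1f);   // compute
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost); // GPU -> CPU

    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    printf("kernel + transfers: %.3f ms\n", ms);

    cudaFree(d_u);
    cudaFree(d_out);
    free(h_u);
    free(h_out);
    return 0;
}
```

Timing only the `advect` launch (moving the events inside the two `cudaMemcpy` calls) would give the much larger "calculation only" speed-up the abstract reports separately from the transfer-inclusive figure.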

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors