Acceleration of iterative Navier-Stokes solvers on graphics processing units

Tomczak, T., Zadarnowska, K., Koza, Z., Matyka, M., Mirosław, Ł.
Wroclaw University of Technology
International Journal of Computational Fluid Dynamics

@article{doi:10.1080/10618562.2013.804178,

   author={Tomczak, Tadeusz and Zadarnowska, Katarzyna and Koza, Zbigniew and Matyka, Maciej and Mirosław, Łukasz},

   title={Acceleration of iterative Navier-Stokes solvers on graphics processing units},

   journal={International Journal of Computational Fluid Dynamics},

   volume={27},

   number={4-5},

   pages={201-209},

   year={2013},

   doi={10.1080/10618562.2013.804178},

   URL={http://www.tandfonline.com/doi/abs/10.1080/10618562.2013.804178},

   eprint={http://www.tandfonline.com/doi/pdf/10.1080/10618562.2013.804178}

}

While new power-efficient computer architectures exhibit spectacular theoretical peak performance, they require specific conditions to operate efficiently, which makes porting complex algorithms a challenge. Here, we report results for the semi-implicit method for pressure-linked equations (SIMPLE) and the pressure implicit with operator splitting (PISO) methods implemented on the graphics processing unit (GPU). We examine the advantages and disadvantages of a full port over a partial acceleration of these algorithms run on unstructured meshes. We find that the full-port strategy requires adjusting the internal data structures to the new hardware, and we propose a convenient format for storing internal data structures on GPUs. Our implementation is validated on standard steady and unsteady problems, and its computational efficiency is checked by comparing its results and run times with those of standard software (OpenFOAM) run on a central processing unit (CPU). The results show that a server-class GPU outperforms a server-class dual-socket multi-core CPU system running essentially the same algorithm by up to a factor of 4.

HGPU group © 2010-2017 hgpu.org