Nodal Discontinuous Galerkin Methods on Graphics Processors

Andreas Klöckner, Tim Warburton, Jeffrey Bridge, Jan S. Hesthaven
Division of Applied Mathematics, Brown University, Providence, RI 02912, United States
Journal of Computational Physics, Volume 228, Issue 21, 20 November 2009, Pages 7863-7882, arXiv:0901.1024v3 [math.NA] (3 Apr 2009)

@article{klockner2009nodal,

   title={Nodal discontinuous Galerkin methods on graphics processors},

   author={Kl{\"o}ckner, A. and Warburton, T. and Bridge, J. and Hesthaven, J.S.},

   journal={Journal of Computational Physics},

   volume={228},

   number={21},

   pages={7863--7882},

   issn={0021-9991},

   year={2009},

   publisher={Elsevier}

}

Discontinuous Galerkin (DG) methods for the numerical solution of partial differential equations have enjoyed considerable success because they are both flexible and robust: They allow arbitrary unstructured geometries and easy control of accuracy without compromising simulation stability. Lately, another property of DG has been growing in importance: The majority of a DG operator is applied in an element-local way, with weak penalty-based element-to-element coupling. The resulting locality in memory access is one of the factors that enables DG to run on off-the-shelf, massively parallel graphics processors (GPUs). In addition, DG’s high-order nature lets it require fewer data points per represented wavelength and hence fewer memory accesses, in exchange for higher arithmetic intensity. Both of these factors work significantly in favor of a GPU implementation of DG. Using a single US$400 Nvidia GTX 280 GPU, we accelerate a solver for Maxwell’s equations on a general 3D unstructured grid by a factor of 40 to 60 relative to a serial computation on a current-generation CPU. In many cases, our algorithms exhibit full use of the device’s available memory bandwidth. Example computations achieve and surpass 200 gigaflops/s of net application-level floating point work. In this article, we describe and derive the techniques used to reach this level of performance. In addition, we present comprehensive data on the accuracy and runtime behavior of the method.
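The element-local structure described in the abstract is the key to the GPU mapping. As a rough, hypothetical sketch (NumPy here for brevity, not the authors' CUDA implementation; the sizes `K` and `Np` are made-up placeholders), the bulk of a nodal DG operator amounts to the same small dense matrix applied independently to every element's nodal values:

```python
import numpy as np

# Hypothetical sizes: K elements, Np nodes per element.
K, Np = 1000, 20

rng = np.random.default_rng(0)
D = rng.standard_normal((Np, Np))  # element-local (e.g. differentiation) operator
u = rng.standard_normal((K, Np))   # nodal values, one row per element

# The element-local part of a DG operator: the same small dense matrix
# applied independently to each element -- no cross-element data
# dependence, which is what maps well onto a massively parallel GPU.
du = u @ D.T                       # shape (K, Np)

assert du.shape == (K, Np)
assert np.allclose(du[0], D @ u[0])
```

Only the weak, penalty-based flux terms couple neighboring elements, so the memory-access pattern stays largely local, as the abstract notes.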
