Finite element assembly strategies on multi- and many-core architectures

G. R. Markall, A. Slemmer, D. A. Ham, P. H. J. Kelly, C. D. Cantwell, S. J. Sherwin
Department of Computing, Imperial College London
International Journal for Numerical Methods in Fluids, 2011


@article{markall2011finite,
   title={Finite element assembly strategies on multi- and many-core architectures},
   author={Markall, G. R. and Slemmer, A. and Ham, D. A. and Kelly, P. H. J. and Cantwell, C. D. and Sherwin, S. J.},
   journal={International Journal for Numerical Methods in Fluids},
   year={2011}
}





We demonstrate that radically differing implementations of finite element methods are needed on multi-core (CPU) and many-core (GPU) architectures, if their respective performance potential is to be realised. Our experimental investigations using a finite element advection-diffusion solver show that increased performance on each architecture can only be achieved by committing to specific and diverse algorithmic choices that cut across the high-level structure of the implementation. Making these commitments to achieve high performance for a single architecture leads to a loss of performance portability. Data structures that include redundant data but enable coalesced memory accesses are faster on many-core architectures, whereas redundancy-free data structures that are accessed indirectly are faster on multi-core architectures. The Addto algorithm for global assembly is optimal on multi-core architectures, whereas the Local Matrix Approach is optimal on many-core architectures despite requiring more computation than the Addto algorithm. These results demonstrate the value in making the correct choice of algorithm and data structure when implementing finite element methods, spectral element methods and low-order discontinuous Galerkin methods on modern high-performance architectures.
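The two assembly strategies contrasted in the abstract can be sketched in a few lines. Below is a minimal, illustrative example (not the paper's code) using a 1D linear-element stiffness matrix: the Addto algorithm scatter-adds each element matrix into one global matrix, while the Local Matrix Approach never assembles a global matrix and instead applies the stored element matrices during every matrix-vector product. All names (`K_local`, `conn`, etc.) are assumptions for illustration.

```python
import numpy as np

n_elems = 8
n_nodes = n_elems + 1
h = 1.0 / n_elems

# Local stiffness matrix of a 1D linear element (identical for every
# element on a uniform mesh).
K_local = (1.0 / h) * np.array([[ 1.0, -1.0],
                                [-1.0,  1.0]])

# Element-to-node connectivity: element e touches nodes e and e+1.
conn = np.array([[e, e + 1] for e in range(n_elems)])

# --- Addto algorithm: assemble one global matrix by scatter-addition.
K_global = np.zeros((n_nodes, n_nodes))
for e in range(n_elems):
    for a in range(2):
        for b in range(2):
            K_global[conn[e, a], conn[e, b]] += K_local[a, b]

# --- Local Matrix Approach: keep the element matrices and apply them
# per element inside each matrix-vector product (gather, local matvec,
# scatter). More flops, but a regular, redundancy-friendly access
# pattern of the kind the abstract says suits many-core hardware.
def local_matrix_matvec(x):
    y = np.zeros_like(x)
    for e in range(n_elems):
        y[conn[e]] += K_local @ x[conn[e]]
    return y

# Both strategies realise the same operator.
x = np.random.default_rng(0).standard_normal(n_nodes)
assert np.allclose(K_global @ x, local_matrix_matvec(x))
```

On a GPU the per-element loop of `local_matrix_matvec` maps naturally onto one thread (or thread block) per element, whereas the scatter-add of the Addto algorithm requires atomics or colouring to avoid write conflicts; this is one way to read the abstract's conclusion that the optimal choice differs between architectures.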