Performance Engineering of the Kernel Polynomial Method on Large-Scale CPU-GPU Systems

Moritz Kreutzer, Georg Hager, Gerhard Wellein, Andreas Pieper, Andreas Alvermann, Holger Fehske
Erlangen Regional Computing Center, Friedrich-Alexander University of Erlangen-Nuremberg, Erlangen, Germany
arXiv:1410.5242 [cs.CE] (20 Oct 2014)


@ARTICLE{2014arXiv1410.5242K,
   author = {{Kreutzer}, M. and {Hager}, G. and {Wellein}, G. and {Pieper}, A. and {Alvermann}, A. and {Fehske}, H.},
    title = "{Performance Engineering of the Kernel Polynomial Method on Large-Scale CPU-GPU Systems}",
  journal = {ArXiv e-prints},
archivePrefix = {arXiv},
   eprint = {1410.5242},
 primaryClass = {cs.CE},
 keywords = {Computer Science – Computational Engineering, Finance, and Science; Condensed Matter – Mesoscale and Nanoscale Physics; Computer Science – Distributed, Parallel, and Cluster Computing; Computer Science – Performance; Physics – Computational Physics},
     year = {2014},
    month = oct,
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}





The Kernel Polynomial Method (KPM) is a well-established scheme in quantum physics and quantum chemistry to determine the eigenvalue density and spectral properties of large sparse matrices. In this work we demonstrate the high optimization potential and feasibility of petascale heterogeneous CPU-GPU implementations of the KPM. At the node level we show that it is possible to decouple the sparse matrix problem posed by KPM from main memory bandwidth both on CPU and GPU. To alleviate the effects of scattered data access, we combine loosely coupled outer iterations with tightly coupled block sparse-matrix multiple-vector operations, which enables pure data streaming. All optimizations are guided by a performance analysis and modelling process that indicates how the computational bottlenecks change with each optimization step. Finally, we use the optimized node-level KPM with a hybrid-parallel framework to perform large-scale heterogeneous electronic structure calculations for novel topological materials on a petascale-class Cray XC30 system.
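For readers unfamiliar with the method: the computational core of KPM is a three-term Chebyshev recurrence whose dominant cost is sparse matrix-vector multiplication, which is why memory bandwidth and data access patterns govern performance. The following minimal Python/SciPy sketch illustrates only that recurrence; it is not the authors' optimized CPU-GPU implementation, and the function name and structure are assumptions made for illustration.

```python
import numpy as np
import scipy.sparse as sp

def kpm_moments(H, v, num_moments):
    """Compute KPM Chebyshev moments mu_n = <v| T_n(H) |v>.

    H must be rescaled so its spectrum lies in [-1, 1]. The dominant
    cost per moment is one sparse matrix-vector multiplication.
    (Illustrative sketch only -- not the paper's optimized kernel.)
    """
    mu = np.empty(num_moments)
    t_prev = v.copy()        # T_0(H) v = v
    t_curr = H @ v           # T_1(H) v = H v
    mu[0] = v @ t_prev
    mu[1] = v @ t_curr
    for n in range(2, num_moments):
        # Three-term recurrence: T_n(H) v = 2 H T_{n-1}(H) v - T_{n-2}(H) v
        t_next = 2.0 * (H @ t_curr) - t_prev
        mu[n] = v @ t_next
        t_prev, t_curr = t_curr, t_next
    return mu
```

The blocking optimization described in the abstract amounts to running this recurrence for several start vectors at once, so the sparse matrix is streamed from memory once per block of vectors (a sparse-matrix multiple-vector multiplication) instead of once per vector.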
