
Sparse matrix-vector multiplication on GPGPU clusters: A new storage format and a scalable implementation

Moritz Kreutzer, Georg Hager, Gerhard Wellein, Holger Fehske, Achim Basermann, Alan R. Bishop
Erlangen Regional Computing Center, Erlangen, Germany
arXiv:1112.5588v1 [cs.DC] (23 Dec 2011)

@article{2011arXiv1112.5588K,
   author = {{Kreutzer}, M. and {Hager}, G. and {Wellein}, G. and {Fehske}, H. and {Basermann}, A. and {Bishop}, A.~R.},
   title = {{Sparse matrix-vector multiplication on GPGPU clusters: A new storage format and a scalable implementation}},
   journal = {ArXiv e-prints},
   archivePrefix = {arXiv},
   eprint = {1112.5588},
   primaryClass = {cs.DC},
   keywords = {Computer Science – Distributed, Parallel, and Cluster Computing, Computer Science – Performance},
   year = {2011},
   month = {dec},
   adsurl = {http://adsabs.harvard.edu/abs/2011arXiv1112.5588K},
   adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

Sparse matrix-vector multiplication (spMVM) is the dominant operation in many sparse solvers. We investigate performance properties of spMVM with matrices of various sparsity patterns on the nVidia "Fermi" class of GPGPUs. A new "padded jagged diagonals storage" (pJDS) format is proposed which may substantially reduce the memory overhead intrinsic to the widespread ELLPACK-R scheme. In our test scenarios the pJDS format cuts the overall spMVM memory footprint on the GPGPU by up to 70%, and achieves 95% to 130% of the ELLPACK-R performance. Using a suitable performance model we identify performance bottlenecks on the node level that invalidate some types of matrix structures for efficient multi-GPGPU parallelization. For appropriate sparsity patterns we extend previous work on distributed-memory parallel spMVM to demonstrate a scalable hybrid MPI-GPGPU code, achieving efficient overlap of communication and computation.
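
The abstract contrasts the proposed pJDS format with the ELLPACK-R scheme. As a point of reference, the following CUDA sketch shows the typical structure of an ELLPACK-R-style spMVM kernel (one thread per row, nonzeros zero-padded to the longest row and stored column-major). It is a minimal illustration under these assumptions, not code from the paper; all identifiers (spmv_ellpack_r, val, col, rowlen) are placeholders.

// Minimal CUDA sketch of an ELLPACK-R-style spMVM kernel: one thread per row,
// nonzeros stored column-major with zero padding up to the longest row.
// Illustration of the baseline scheme named in the abstract, not the authors'
// pJDS implementation; all identifiers are placeholder names.
__global__ void spmv_ellpack_r(int nrows,
                               const double *val,    // padded nonzeros, column-major
                               const int    *col,    // column indices, same layout
                               const int    *rowlen, // actual nonzeros in each row
                               const double *x,      // input vector
                               double       *y)      // result vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < nrows) {
        double sum = 0.0;
        // Column-major layout: thread i reads element j*nrows + i, so the
        // threads of a warp access consecutive addresses (coalesced loads).
        for (int j = 0; j < rowlen[row]; ++j) {
            sum += val[j * nrows + row] * x[col[j * nrows + row]];
        }
        y[row] = sum;
    }
}

The pJDS format keeps this one-thread-per-row structure but, as its name suggests, sorts rows by decreasing nonzero count and pads them only within small blocks rather than up to the globally longest row, which reduces the zero fill-in and hence the memory footprint; the exact layout and kernel are described in the paper.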
