Implementing a Sparse Matrix Vector Product for the SELL-C/SELL-C-sigma formats on NVIDIA GPUs

Hartwig Anzt, Stanimire Tomov, Jack Dongarra
Innovative Computing Lab, University of Tennessee, Knoxville, USA
Innovative Computing Lab, University of Tennessee, Technical report ut-eecs-14-727, 2014

@techreport{anzt2014implementing,
   title={Implementing a Sparse Matrix Vector Product for the SELL-C/SELL-C-$\sigma$ formats on NVIDIA GPUs},
   author={Anzt, Hartwig and Tomov, Stanimire and Dongarra, Jack},
   institution={Innovative Computing Lab, University of Tennessee},
   number={ut-eecs-14-727},
   year={2014}
}

Numerical methods in sparse linear algebra typically rely on a fast and efficient matrix-vector product, as this usually forms the backbone of iterative algorithms for solving eigenvalue problems or linear systems. Given the large diversity in the characteristics of high-performance computer architectures, it is a challenge to derive a cross-platform efficient storage format along with fast matrix-vector kernels. Recently, attention has focused on the SELL-C-sigma format, a sliced ELLPACK format enhanced by row sorting to reduce the fill-in when padding rows with zeros. In this paper we propose an additional modification resulting in the padded sliced ELLPACK (SELLP) format, for which we develop a sparse matrix-vector CUDA kernel that efficiently exploits the computing power of NVIDIA GPUs. We show that this kernel outperforms straightforward implementations for the widespread CSR and ELLPACK formats, and is highly competitive with the implementations in the highly optimized CUSPARSE library.
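The SELL-C-sigma idea the abstract refers to can be illustrated without GPU code: rows are sorted by nonzero count within windows of sigma rows, grouped into slices of C rows, and each slice is zero-padded only to the width of its own longest row, which keeps the fill-in far below plain ELLPACK. The following Python sketch (NumPy-based, with hypothetical function names; not the authors' implementation) builds such a structure from CSR input and performs the matrix-vector product slice by slice:

```python
import numpy as np

def csr_to_sellc(rowptr, colind, vals, C=4, sigma=8):
    """Sketch of a SELL-C/SELL-C-sigma conversion from CSR.

    Rows are sorted by descending nonzero count within windows of
    sigma rows, then grouped into slices of C rows; each slice is
    zero-padded to the length of its longest row only.
    """
    n = len(rowptr) - 1
    lengths = np.diff(rowptr)
    # Sorting permutation, computed per sigma-window (stable sort
    # keeps the reordering local, as in SELL-C-sigma).
    perm = np.concatenate([
        w + np.argsort(-lengths[w:w + sigma], kind="stable")
        for w in range(0, n, sigma)
    ])
    slices = []
    for s in range(0, n, C):
        rows = perm[s:s + C]
        width = max(lengths[r] for r in rows)  # slice-local padding width
        cols = np.zeros((len(rows), width), dtype=int)
        data = np.zeros((len(rows), width))
        for i, r in enumerate(rows):
            k = lengths[r]
            cols[i, :k] = colind[rowptr[r]:rowptr[r + 1]]
            data[i, :k] = vals[rowptr[r]:rowptr[r + 1]]
        slices.append((rows, cols, data))
    return slices

def sellc_spmv(slices, x, n):
    """y = A @ x using the sliced storage; padded zeros contribute nothing.

    On a GPU, each slice row would map to a thread (or several), which
    is what makes the column-padded layout coalescing-friendly.
    """
    y = np.zeros(n)
    for rows, cols, data in slices:
        y[rows] = (data * x[cols]).sum(axis=1)
    return y
```

This is only a reference model of the storage scheme; the paper's contribution is the CUDA kernel and the extra padding (SELLP) that aligns slice widths with the GPU's thread-block granularity.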

* * *

HGPU group © 2010-2017 hgpu.org

All rights belong to the respective authors
