High-performance sparse matrix-vector multiplication on GPUs for structured grid computations
Department of Computer Science and Engineering, The Ohio State University, Columbus, OH 43210
Proceedings of the 5th Annual Workshop on General Purpose Processing with Graphics Processing Units (GPGPU-5), 2012
@inproceedings{godwin2012high,
title={High-performance sparse matrix-vector multiplication on GPUs for structured grid computations},
author={Godwin, J. and Holewinski, J. and Sadayappan, P.},
booktitle={Proceedings of the 5th Annual Workshop on General Purpose Processing with Graphics Processing Units},
pages={47--56},
year={2012},
organization={ACM}
}
In this paper, we address efficient sparse matrix-vector multiplication for matrices arising from structured grid problems with high degrees of freedom at each grid node. Sparse matrix-vector multiplication is a critical step in the iterative solution of sparse linear systems of equations arising in the solution of partial differential equations using uniform grids for discretization. With uniform grids, the resulting linear system Ax = b has a matrix A that is sparse with a very regular structure. The specific focus of this paper is on sparse matrices that have a block structure due to the large number of unknowns at each grid point. Sparse matrix storage formats such as Compressed Sparse Row (CSR) and Diagonal format (DIA) are not the most effective for such matrices. In this work, we present a new sparse matrix storage format that takes advantage of the diagonal structure of matrices for stencil operations on structured grids. Unlike other formats such as the Diagonal storage format (DIA), we specifically optimize for the case of higher degrees of freedom, where formats such as DIA are forced to explicitly represent many zero elements in the sparse matrix. We develop efficient sparse matrix-vector multiplication for structured grid computations on GPU architectures using CUDA.
April 25, 2012 by hgpu