Automatic Selection of Sparse Matrix Representation on GPUs
The Ohio State University, Columbus, OH, USA
Proceedings of the 29th ACM on International Conference on Supercomputing (ICS ’15), 2015
@inproceedings{sedaghati2015automatic,
title={Automatic Selection of Sparse Matrix Representation on GPUs},
author={Sedaghati, Naser and Mu, Te and Pouchet, Louis-Noel and Parthasarathy, Srinivasan and Sadayappan, P},
booktitle={Proceedings of the 29th ACM on International Conference on Supercomputing},
pages={99--108},
year={2015},
organization={ACM}
}
Sparse matrix-vector multiplication (SpMV) is a core kernel in numerous applications, ranging from physics simulation and large-scale solvers to data analytics. Many GPU implementations of SpMV have been proposed, targeting several sparse representations and aiming to maximize overall performance. No single sparse matrix representation is uniformly superior, and the best-performing representation varies for sparse matrices with different sparsity patterns. In this paper, we study the interrelation between GPU architecture, sparse matrix representation, and the sparse dataset. We perform extensive characterization of pertinent sparsity features of around 700 sparse matrices, and of their SpMV performance with a number of sparse representations implemented in the NVIDIA CUSP and cuSPARSE libraries. We then build a decision model using machine learning to automatically select the best representation to use for a given sparse matrix on a given target platform, based on the sparse matrix features. Experimental results on three GPUs demonstrate that the approach is very effective in selecting the best representation.
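The sketch below illustrates the general idea described in the abstract, not the authors' implementation: compute simple structural features of a sparse matrix (dimensions, density, nonzeros-per-row statistics) and feed them to a decision-tree classifier that predicts which representation (e.g., COO, CSR, ELL, HYB) to use for SpMV. The feature set, label set, and training data here are illustrative assumptions; the paper derives its own features and trains on measured SpMV performance per GPU.

```python
# Hedged sketch of representation selection via a learned decision model.
# Feature names, labels, and training data are placeholders, not the paper's.
import numpy as np
import scipy.sparse as sp
from sklearn.tree import DecisionTreeClassifier

def sparsity_features(A):
    """Per-matrix structural features (hypothetical feature set)."""
    A = A.tocsr()
    nnz_per_row = np.diff(A.indptr)          # nonzeros in each row
    n_rows, n_cols = A.shape
    return np.array([
        n_rows,
        n_cols,
        A.nnz,
        A.nnz / (n_rows * n_cols),           # overall density
        nnz_per_row.mean(),                  # average row length
        nnz_per_row.std(),                   # row-length variability
        nnz_per_row.max(),                   # longest row (ELL padding cost)
    ])

# Placeholder training set: benchmark matrices paired with the representation
# that was measured fastest on the target GPU (labels are made up here).
train_matrices = [sp.random(1000, 1000, density=d, format="csr")
                  for d in (0.001, 0.01, 0.05)]
train_labels = ["CSR", "HYB", "ELL"]

X = np.vstack([sparsity_features(A) for A in train_matrices])
model = DecisionTreeClassifier(max_depth=4).fit(X, train_labels)

# At runtime, predict the representation to use for an unseen matrix.
A_new = sp.random(2000, 2000, density=0.02, format="csr")
print(model.predict([sparsity_features(A_new)])[0])
```

In practice, such a model would be trained per GPU from measured SpMV times across the matrix corpus, so the selected representation reflects both the sparsity structure and the target architecture.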
June 14, 2015 by hgpu