The sparse matrix vector product on GPUs
Dept. of Computer Architecture and Electronics, University of Almeria
Proceedings of the 2009 International Conference on Computational and Mathematical Methods in Science and Engineering
@conference{vazquez2009sparse,
  title={The sparse matrix vector product on GPUs},
  author={Vazquez, F. and Garzon, E.M. and Martinez, J.A. and Fernandez, J.J.},
  booktitle={Proceedings of the 2009 International Conference on Computational and Mathematical Methods in Science and Engineering},
  volume={2},
  pages={1081--1092},
  year={2009}
}
The sparse matrix vector product (SpMV) is a paramount operation in engineering and scientific computing and, hence, has long been a subject of intense research. The irregular computations involved in SpMV make its optimization challenging. Therefore, enormous effort has been devoted to devising data formats to store the sparse matrix with the ultimate aim of maximizing performance. Graphics Processing Units (GPUs) have recently emerged as platforms that yield outstanding acceleration factors, and SpMV implementations for NVIDIA GPUs have already appeared on the scene. This work proposes and evaluates a new implementation of SpMV for GPUs based on a new matrix storage format, called ELLPACK-R, and compares it against a variety of formats proposed elsewhere. The most important qualities of this new format are that (1) no preprocessing of the sparse matrix is required, and (2) the resulting SpMV algorithm is very regular. The comparative evaluation of this new SpMV approach has been carried out on a representative set of test matrices. The results show that the SpMV approach based on ELLPACK-R is superior to the strategies used previously. Moreover, a comparison with standard state-of-the-art superscalar processors reveals that significant speedup factors are achieved with GPUs.
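To make the format concrete, the following is a minimal serial sketch (not the authors' code) of how ELLPACK-R extends plain ELLPACK: the nonzeros of each row are stored in padded `val`/`col` arrays, and an extra per-row length array `rl` lets each GPU thread stop at its row's true length instead of iterating over the zero padding. All function names here are illustrative, and the GPU-specific details (one thread per row, column-major storage for coalesced accesses) are only noted in comments.

```python
import numpy as np

def to_ellpack_r(A):
    """Convert a dense matrix to ELLPACK-R: padded val/col arrays plus
    a per-row nonzero-count array rl (the addition over plain ELLPACK)."""
    n_rows = A.shape[0]
    rows = [np.nonzero(A[i])[0] for i in range(n_rows)]
    rl = np.array([len(r) for r in rows])        # true length of each row
    max_len = rl.max()
    val = np.zeros((n_rows, max_len))
    col = np.zeros((n_rows, max_len), dtype=int)
    for i, cols in enumerate(rows):
        val[i, :len(cols)] = A[i, cols]
        col[i, :len(cols)] = cols
    # On the GPU, val and col would be laid out column-major so that
    # consecutive threads (one per row) make coalesced memory accesses.
    return val, col, rl

def spmv_ellpack_r(val, col, rl, x):
    """Serial emulation of the per-row kernel: 'thread' i iterates
    only up to rl[i], skipping the padding entirely."""
    y = np.zeros(val.shape[0])
    for i in range(val.shape[0]):    # one GPU thread per row
        for k in range(rl[i]):       # stop at the true row length
            y[i] += val[i, k] * x[col[i, k]]
    return y

A = np.array([[4., 0., 1.],
              [0., 2., 0.],
              [3., 0., 5.]])
x = np.array([1., 2., 3.])
val, col, rl = to_ellpack_r(A)
print(spmv_ellpack_r(val, col, rl, x))   # matches A @ x
```

Note how the conversion is a single pass over the rows with no reordering, which reflects the "no preprocessing" claim, and how the inner loop has a fixed, regular structure per thread.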
March 3, 2011 by hgpu