Three storage formats for sparse matrices on GPGPUs
Dipartimento di Ingegneria Civile e Ingegneria Informatica, Università di Roma "Tor Vergata", Roma, Italy
Università di Roma "Tor Vergata", 2015
@techreport{barbieri2015three,
  title       = {Three storage formats for sparse matrices on GPGPUs},
  author      = {Barbieri, Davide and Cardellini, Valeria and Fanfarillo, Alessandro and Filippone, Salvatore},
  institution = {Universit\`{a} di Roma ``Tor Vergata''},
  number      = {DICII RR-15.6},
  month       = feb,
  year        = {2015}
}
The multiplication of a sparse matrix by a dense vector is a centerpiece of scientific computing applications: it is the essential kernel for the solution of sparse linear systems and sparse eigenvalue problems by iterative methods. The efficient implementation of the sparse matrix-vector multiplication is therefore crucial and has been the subject of an immense amount of research, with interest renewed with every major new trend in high performance computing architectures. The introduction of General Purpose Graphics Processing Units (GPGPUs) is no exception, and many articles have been devoted to this problem. In this report we propose three novel matrix formats: ELL-G and HLL, which derive from ELL, and HDIA, for matrices having a mostly diagonal sparsity pattern. We compare the performance of the proposed formats to that of state-of-the-art formats (i.e., HYB and ELLRT) with experiments run on different GPU platforms and test matrices coming from various application domains.
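For readers unfamiliar with the baseline that ELL-G and HLL extend, the following is a minimal sketch of the classic ELLPACK (ELL) layout together with a one-thread-per-row CUDA SpMV kernel over it, assuming the usual column-major storage that yields coalesced memory access. The kernel and variable names (ell_spmv, max_nnz, and so on) are illustrative only and do not come from the report; the formats proposed there refine this baseline for GPU execution.

// Minimal ELLPACK (ELL) SpMV sketch: one thread per row, column-major storage.
#include <cstdio>
#include <cuda_runtime.h>

// y = A*x, with A stored in ELL format: values/col_idx are num_rows x max_nnz
// arrays laid out column-major (entry k of row r lives at k * num_rows + r).
// Padding slots carry value 0 and any valid column index.
__global__ void ell_spmv(int num_rows, int max_nnz,
                         const int *col_idx, const double *values,
                         const double *x, double *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= num_rows) return;
    double sum = 0.0;
    for (int k = 0; k < max_nnz; ++k) {
        int idx = k * num_rows + row;   // column-major: coalesced across a warp
        sum += values[idx] * x[col_idx[idx]];
    }
    y[row] = sum;
}

int main()
{
    // 4x4 example with at most 2 nonzeros per row (shorter rows are padded).
    const int num_rows = 4, max_nnz = 2;
    // Row 0: (0,0)=4, (0,1)=1; Row 1: (1,1)=3; Row 2: (2,2)=2, (2,3)=1; Row 3: (3,3)=5
    double h_values[] = {4, 3, 2, 5,    // k = 0, rows 0..3
                         1, 0, 1, 0};   // k = 1, rows 0..3 (padding in rows 1 and 3)
    int    h_cols[]   = {0, 1, 2, 3,
                         1, 1, 3, 3};
    double h_x[] = {1, 1, 1, 1}, h_y[4];

    double *d_values, *d_x, *d_y; int *d_cols;
    cudaMalloc((void **)&d_values, sizeof(h_values));
    cudaMalloc((void **)&d_cols,   sizeof(h_cols));
    cudaMalloc((void **)&d_x,      sizeof(h_x));
    cudaMalloc((void **)&d_y,      sizeof(h_y));
    cudaMemcpy(d_values, h_values, sizeof(h_values), cudaMemcpyHostToDevice);
    cudaMemcpy(d_cols,   h_cols,   sizeof(h_cols),   cudaMemcpyHostToDevice);
    cudaMemcpy(d_x,      h_x,      sizeof(h_x),      cudaMemcpyHostToDevice);

    ell_spmv<<<1, 64>>>(num_rows, max_nnz, d_cols, d_values, d_x, d_y);
    cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);
    for (int i = 0; i < num_rows; ++i) printf("y[%d] = %g\n", i, h_y[i]);

    cudaFree(d_values); cudaFree(d_cols); cudaFree(d_x); cudaFree(d_y);
    return 0;
}

The column-major layout is the key design choice on GPUs: consecutive threads (rows) touch consecutive addresses at each step k, whereas the padding that keeps every row at max_nnz entries wastes memory and work when row lengths vary widely, which is precisely the issue that hashed/blocked variants such as HLL address.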