Efficient Sparse Matrix-Vector Multiplication on GPUs using the CSR Storage Format
AMD Research, Advanced Micro Devices, Inc., USA
International Conference for High Performance Computing, Networking, Storage and Analysis (SC14), 2014
@inproceedings{daga2014efficient,
  title={Efficient Sparse Matrix-Vector Multiplication on GPUs using the CSR Storage Format},
  author={Greathouse, Joseph L. and Daga, Mayank},
  booktitle={International Conference for High Performance Computing, Networking, Storage and Analysis (SC14)},
  year={2014}
}
The performance of sparse matrix-vector multiplication (SpMV) is important to computational scientists. Compressed sparse row (CSR) is the most frequently used format to store sparse matrices. However, CSR-based SpMV on graphics processing units (GPUs) has poor performance due to irregular memory access patterns, load imbalance, and reduced parallelism. This has led researchers to propose new storage formats. Unfortunately, dynamically transforming CSR into these formats incurs significant runtime and storage overheads. We propose a novel algorithm, CSR-Adaptive, which keeps the CSR format intact and maps well to GPUs. Our implementation addresses the aforementioned challenges by (i) efficiently accessing DRAM by streaming data into the local scratchpad memory and (ii) dynamically assigning different numbers of rows to each parallel GPU compute unit. CSR-Adaptive achieves an average speedup of 14.7x over existing CSR-based algorithms and 2.3x over clSpMV cocktail, which uses an assortment of matrix formats.
November 4, 2015 by hgpu