
Dynamic Sparse-Matrix Allocation on GPUs

James King, Thomas Gilray, Robert M. Kirby, Matthew Might
University of Utah
International Supercomputing Conference (ISC), 2016

@inproceedings{king2016dynamic,
   title={Dynamic Sparse-Matrix Allocation on GPUs},
   author={King, James and Gilray, Thomas and Kirby, Robert M. and Might, Matthew},
   booktitle={International Supercomputing Conference (ISC)},
   year={2016}
}



Sparse matrices are a core component in many numerical simulations, and their efficiency is essential to achieving high performance. Dynamic sparse-matrix allocation (insertion) can benefit a number of problems such as sparse-matrix factorization, sparse-matrix-matrix addition, static analysis (e.g., points-to analysis), computing transitive closure, and other graph algorithms. Existing sparse-matrix formats are poorly designed to handle dynamic updates. The compressed sparse-row (CSR) format is fully compact and must be rebuilt after each new entry. Ellpack (ELL) stores a constant number of entries per row, which allows for efficient insertion and sparse matrix-vector multiplication (SpMV), but is memory inefficient and strictly limits row size. The coordinate (COO) format stores a list of entries and is efficient for both memory use and insertion time; however, it is much less efficient at SpMV. Hybrid ellpack (HYB) compromises by using a combination of ELL and COO, but degrades in performance as the COO portion fills up: if any rows spill into the COO portion, that portion must be traversed in its entirety during every SpMV operation. In this paper we take a new approach, introducing dynamic compressed sparse row (DCSR) as a sparse-matrix format that permits efficient dynamic updates. These updates are significantly faster than those made to a HYB matrix while maintaining SpMV times comparable to CSR. We demonstrate the efficacy of our dynamic allocation scheme, evaluating updates and SpMV operations on adjacency matrices of sparse-graph benchmarks on the GPU.
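For context, the sketch below is a conventional one-thread-per-row CSR SpMV kernel in CUDA; it is illustrative only (names such as csr_spmv, row_ptr, col_idx, and vals are assumptions, not code from the paper). It shows why the fully compact CSR layout is fast for SpMV yet forces an expensive rebuild whenever a new nonzero is inserted, which is the tension DCSR is designed to resolve.

// Minimal sketch (not the paper's DCSR code): standard one-thread-per-row
// CSR SpMV computing y = A*x. Launch with, e.g., <<<(num_rows+255)/256, 256>>>.
#include <cuda_runtime.h>

__global__ void csr_spmv(int num_rows,
                         const int   *row_ptr,   // num_rows + 1 row offsets
                         const int   *col_idx,   // column index per nonzero
                         const float *vals,      // value per nonzero
                         const float *x,         // dense input vector
                         float       *y)         // dense output vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows) {
        float sum = 0.0f;
        // The nonzeros of each row are stored contiguously, so a row is read
        // with a simple linear scan and no wasted storage -- the strength of CSR.
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += vals[j] * x[col_idx[j]];
        y[row] = sum;
    }
}

// By contrast, inserting a new nonzero into row r requires shifting every
// entry of col_idx/vals that follows row_ptr[r+1] and incrementing all later
// row offsets -- effectively rebuilding the matrix after each insertion,
// which is the cost that dynamic formats such as DCSR aim to avoid.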