
Efficient CSR-Based Sparse Matrix-Vector Multiplication on GPU

Jiaquan Gao, Panpan Qi, Guixia He
School of Computer Science and Technology, Nanjing Normal University, Nanjing 210097, China
Mathematical Problems in Engineering, 2016

@article{gao2016efficient,
   title={Efficient CSR-Based Sparse Matrix-Vector Multiplication on GPU},
   author={Gao, Jiaquan and Qi, Panpan and He, Guixia},
   journal={Mathematical Problems in Engineering},
   year={2016}
}


Sparse matrix-vector multiplication (SpMV) is an important operation in computational science, and needs to be accelerated because it often represents the dominant cost in many widely-used iterative methods and eigenvalue problems. We achieve this objective by proposing a novel SpMV algorithm based on the compressed sparse row (CSR) format on the GPU. Our method dynamically assigns different numbers of rows to each thread block, and executes different optimization implementations on the basis of the number of rows involved in each block. The process of accessing the CSR arrays is fully coalesced, and the GPU’s DRAM bandwidth is efficiently utilized by loading data into the shared memory, which alleviates the bottleneck of many existing CSR-based algorithms (i.e., CSR-scalar and CSR-vector). Test results on C2050 and K20c GPUs show that our method outperforms a perfect-CSR algorithm that inspires our work (up to 1.76x on average on C2050, and up to 1.79x on average on K20c), the vendor-tuned CUSPARSE V6.5 (up to 2.76x on average on C2050, and up to 3.52x on average on K20c) and CUSP V0.5.1 (up to 1.43x on average on C2050, and up to 1.75x on average on K20c), and three popular algorithms: clSpMV (up to 1.55x on average on C2050, and up to 1.56x on average on K20c), CSR5 (up to 1.21x on average on C2050, and up to 1.30x on average on K20c), and CSR-Adaptive (up to 1.02x on average on C2050, and up to 1.07x on average on K20c).
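
For readers unfamiliar with the baselines named in the abstract, the sketch below illustrates the CSR storage layout and the two classic kernels the paper contrasts itself with: CSR-scalar (one thread per row, uncoalesced reads) and CSR-vector (one warp per row, coalesced reads but idle lanes on short rows). This is an illustrative CUDA sketch, not the authors' implementation; all names are placeholders, and the warp reduction via __shfl_down_sync assumes a device of compute capability 3.0 or later.

#include <cuda_runtime.h>

// CSR layout for an n-row matrix with nnz nonzeros:
//   row_ptr[n+1] -- start offset of each row in col_idx/values
//   col_idx[nnz] -- column index of each nonzero
//   values[nnz]  -- nonzero values

// CSR-scalar baseline: one thread per row. Neighboring threads walk
// different rows, so reads of col_idx/values are uncoalesced.
__global__ void spmv_csr_scalar(int n, const int *row_ptr, const int *col_idx,
                                const float *values, const float *x, float *y)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n) {
        float dot = 0.0f;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            dot += values[j] * x[col_idx[j]];
        y[row] = dot;
    }
}

// CSR-vector baseline: one 32-thread warp per row. Lanes read consecutive
// nonzeros, so accesses are coalesced, but short rows leave most lanes idle.
__global__ void spmv_csr_vector(int n, const int *row_ptr, const int *col_idx,
                                const float *values, const float *x, float *y)
{
    int warp = (blockIdx.x * blockDim.x + threadIdx.x) / 32;
    int lane = threadIdx.x & 31;
    if (warp < n) {
        float sum = 0.0f;
        for (int j = row_ptr[warp] + lane; j < row_ptr[warp + 1]; j += 32)
            sum += values[j] * x[col_idx[j]];
        for (int off = 16; off > 0; off >>= 1)   // warp-level reduction
            sum += __shfl_down_sync(0xffffffff, sum, off);
        if (lane == 0)
            y[warp] = sum;
    }
}

The paper's approach, by contrast, assigns a variable number of rows to each thread block and stages the corresponding CSR segments through shared memory, so that global loads remain coalesced regardless of row length.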
