Implementing Sparse Matrix-Vector multiplication using CUDA based on a hybrid sparse matrix format

Wei Cao, Lu Yao, Zongzhe Li, Yongxian Wang, Zhenghua Wang
Nat. Key Lab. for Parallel & Distrib. Process., Nat. Univ. of Defense Technol., Changsha, China
International Conference on Computer Application and System Modeling (ICCASM), 2010


@inproceedings{cao2010spmv,
   title={Implementing Sparse Matrix-Vector multiplication using CUDA based on a hybrid sparse matrix format},
   author={Cao, W. and Yao, L. and Li, Z. and Wang, Y. and Wang, Z.},
   booktitle={Computer Application and System Modeling (ICCASM), 2010 International Conference on},
   year={2010},
}


The Sparse Matrix-Vector product (SpMV) is a key operation in engineering and scientific computing, and methods for implementing it efficiently in parallel are critical to the performance of many applications. Modern Graphics Processing Units (GPUs), coupled with the advent of general-purpose programming environments such as NVIDIA's CUDA, have gained interest as a viable architecture for data-parallel general-purpose computation. SpMV implementations using CUDA based on common sparse matrix formats have already appeared; among them, the implementation based on the ELLPACK-R format performs best. However, in that implementation, when the maximum number of nonzeros per row differs substantially from the average, threads suffer from load imbalance. This paper proposes a new matrix storage format called ELLPACK-RP, which combines the ELLPACK-R format with the JAD format, and implements SpMV using CUDA based on it. The results show that it decreases load imbalance and improves SpMV performance efficiently.
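As a rough illustration of the two ingredients the abstract names, the sketch below shows the ELLPACK-R layout (padded values/column arrays plus a per-row length array) and the JAD-style row reordering by decreasing nonzero count that ELLPACK-RP combines with it. This is a minimal serial Python sketch, not the authors' CUDA code; all function and variable names are assumptions for illustration.

```python
def to_ellpack_r(dense):
    """Convert a dense matrix (list of lists) to ELLPACK-R: values and column
    indices are padded to the longest row, and rl[i] records row i's true
    nonzero count so padding entries can be skipped during SpMV."""
    nz_per_row = [[(j, v) for j, v in enumerate(row) if v != 0] for row in dense]
    rl = [len(nz) for nz in nz_per_row]
    max_nnz = max(rl)
    values = [[v for _, v in nz] + [0.0] * (max_nnz - len(nz)) for nz in nz_per_row]
    col_idx = [[j for j, _ in nz] + [0] * (max_nnz - len(nz)) for nz in nz_per_row]
    return values, col_idx, rl

def spmv_ellpack_r(values, col_idx, rl, x):
    """SpMV over ELLPACK-R: each loop iteration i stands in for one GPU thread
    handling one row; the inner loop stops at rl[i], never touching padding --
    the refinement ELLPACK-R adds over plain ELLPACK."""
    y = [0.0] * len(values)
    for i in range(len(values)):
        for j in range(rl[i]):
            y[i] += values[i][j] * x[col_idx[i][j]]
    return y

def jad_row_order(rl):
    """JAD-style permutation: row indices sorted by decreasing nonzero count,
    so neighbouring threads get similar amounts of work -- the load-balancing
    ingredient that ELLPACK-RP borrows from the JAD format."""
    return sorted(range(len(rl)), key=lambda i: -rl[i])

A = [[4.0, 0.0, 1.0],
     [0.0, 2.0, 0.0],
     [3.0, 0.0, 5.0]]
x = [1.0, 2.0, 3.0]
print(spmv_ellpack_r(*to_ellpack_r(A), x))  # -> [7.0, 4.0, 18.0]
```

On a GPU the inner loop bound `rl[i]` is what lets a short row's thread finish early; sorting rows JAD-style keeps threads in the same warp working on rows of similar length, which is the imbalance the paper targets.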

