New Sparse Matrix Storage Format to Improve The Performance of Total SPMV Time

Neelima Reddy, Raghavendra Prakash, Ram Mohana Reddy
Department of Information Technology, National Institute of Technology, Karnataka, India
Scalable Computing: Practice and Experience, Volume 13, Number 2, pp. 159-171, 2012


@article{reddy2012blsi,
   title={New Sparse Matrix Storage Format to Improve The Performance of Total SPMV Time},
   author={Reddy, N. and Prakash, R. and Reddy, R.M.},
   journal={Scalable Computing: Practice and Experience},
   volume={13},
   number={2},
   pages={159--171},
   year={2012}
}




Graphics Processing Units (GPUs) are massively data-parallel processors. On such processors, high performance comes only at the cost of identifying data parallelism in the application, which is straightforward for applications with regular memory access and high computational intensity. GPUs are nevertheless attractive for sparse matrix-vector multiplication (SPMV for short), which has irregular memory access. SPMV is an important computation in many scientific and engineering applications, and improving the performance, bandwidth utilization, and compute intensity (the ratio of computation to data access) of SPMV is a priority in both academia and industry. Various data structures and access patterns have been proposed for sparse matrix representation on GPUs, and optimizing and improving these data structures is a continuous effort. This paper proposes a new sparse matrix representation format that reduces the data organization time and the CPU-to-GPU memory transfer time for the memory-bound SPMV computation. The BLSI (Bit Level Single Indexing) sparse matrix representation is up to 204% faster than COO (Coordinate), 104% faster than CSR (Compressed Sparse Row), and 217% faster than HYB (Hybrid) formats in memory transfer time from CPU to GPU. The proposed sparse matrix format is implemented in CUDA-C on CUDA (Compute Unified Device Architecture)-supported NVIDIA graphics cards.

