A Performance Modeling and Optimization Analysis Tool for Sparse Matrix-Vector Multiplication on GPUs

Ping Guo, Liqiang Wang, Po Chen
Department of Computer Science, University of Wyoming, Laramie, WY, 82071
IEEE Transactions on Parallel and Distributed Systems, 2012





This paper presents a performance modeling and optimization analysis tool to predict and optimize the performance of sparse matrix-vector multiplication (SpMV) on GPUs. We make the following contributions: (1) We present an integrated analytical and profile-based performance model that accurately predicts the kernel execution times of CSR, ELL, COO, and HYB SpMV kernels. Our proposed approach is general, being neither limited to a particular GPU programming language nor restricted to specific GPU architectures. In this paper, we use CUDA-based SpMV kernels and an NVIDIA Tesla C2050 for our performance modeling and experiments. According to our experiments, for 77 out of 82 test cases, the differences between the predicted and measured execution times are less than 9%; for the remaining 5 test cases, the differences are between 9% and 10%. For the CSR, ELL, COO, and HYB SpMV CUDA kernels, the average differences are 6.3%, 4.4%, 2.2%, and 4.7%, respectively. (2) Based on the performance model, we design a dynamic-programming-based SpMV optimal solution auto-selection algorithm that automatically reports an optimal solution (i.e., optimal storage strategy, storage format(s), and execution time) for a target sparse matrix. In our experiments, the average performance improvements of the optimal solutions are 41.1%, 49.8%, and 37.9%, compared to NVIDIA's CSR, COO, and HYB CUDA kernels, respectively.

