Accelerating Sparse Approximate Matrix Multiplication on GPUs

Xiaoyan Liu, Yi Liu, Ming Dun, Bohong Yin, Hailong Yang, Zhongzhi Luan, Depei Qian
School of Computer Science and Engineering, Beihang University, Beijing, China
arXiv:2103.13042 [cs.PF], (24 Mar 2021)

@misc{liu2021accelerating,
   title={Accelerating Sparse Approximate Matrix Multiplication on GPUs},
   author={Xiaoyan Liu and Yi Liu and Ming Dun and Bohong Yin and Hailong Yang and Zhongzhi Luan and Depei Qian},
   year={2021},
   eprint={2103.13042},
   archivePrefix={arXiv},
   primaryClass={cs.PF}
}

Although matrix multiplication plays a vital role in computational linear algebra, there are few efficient solutions for multiplying near-sparse matrices. The Sparse Approximate Matrix Multiply (SpAMM) is one of the algorithms that fills the performance gap neglected by traditional optimizations for dense/sparse matrix multiplication. However, existing SpAMM algorithms fail to exploit the performance potential of GPUs. In this paper, we present cuSpAMM, the first parallel SpAMM algorithm optimized for multiple GPUs. Several performance optimizations are proposed, including a re-design of the algorithm to adapt to thread parallelism, blocking strategies to optimize memory access, and acceleration with tensor cores. In addition, we scale cuSpAMM to multiple GPUs with an effective load balancing scheme. We evaluate cuSpAMM on both synthesized and real-world datasets on multiple GPUs. The experimental results show that cuSpAMM achieves significant speedup over the vendor-optimized cuBLAS and cuSPARSE libraries.
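
For readers unfamiliar with SpAMM, the sketch below illustrates the core idea in NumPy: recurse over matrix quadrants and skip any sub-product whose Frobenius-norm product falls below a tolerance, so near-zero blocks of a near-sparse matrix are never multiplied. The function name spamm, the tolerance tau, the leaf size, and the power-of-two square shapes are illustrative assumptions; this is not the paper's cuSpAMM interface or its GPU implementation.

import numpy as np

def spamm(A, B, tau, leaf=64):
    # Minimal SpAMM sketch: approximate A @ B by pruning sub-products
    # with ||A_ij||_F * ||B_jk||_F < tau. Assumes square matrices whose
    # size is `leaf` times a power of two.
    n = A.shape[0]
    if n <= leaf:
        return A @ B  # base case: dense multiply on small blocks
    h = n // 2
    C = np.zeros((n, n))
    for i in range(2):          # row quadrant of C
        for k in range(2):      # column quadrant of C
            for j in range(2):  # inner quadrant index
                Ablk = A[i*h:(i+1)*h, j*h:(j+1)*h]
                Bblk = B[j*h:(j+1)*h, k*h:(k+1)*h]
                # Norm-based pruning: skip contributions that are
                # guaranteed to be small.
                if np.linalg.norm(Ablk) * np.linalg.norm(Bblk) >= tau:
                    C[i*h:(i+1)*h, k*h:(k+1)*h] += spamm(Ablk, Bblk, tau, leaf)
    return C

# Example: a near-sparse random matrix; larger tau trades accuracy for speed.
n = 512
A = np.random.rand(n, n) * (np.random.rand(n, n) < 0.05)
C_approx = spamm(A, A, tau=1e-3)

The recursive quadtree formulation shown here is the classical description of SpAMM; the paper re-designs this traversal into blocked GPU kernels with tensor-core acceleration and a multi-GPU load balancing scheme, so the sketch only conveys the pruning criterion, not the optimized implementation.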
