
Adaptive SpMV/SpMSpV on GPUs for Input Vectors of Varied Sparsity

Min Li, Yulong Ao, Chao Yang
Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
arXiv:2006.16767 [cs.DC], 30 Jun 2020

@misc{li2020adaptive,
   title={Adaptive SpMV/SpMSpV on GPUs for Input Vectors of Varied Sparsity},
   author={Min Li and Yulong Ao and Chao Yang},
   year={2020},
   eprint={2006.16767},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}


Despite numerous efforts to optimize the performance of Sparse Matrix-Vector Multiplication (SpMV) on modern hardware architectures, little work has been devoted to its sparse counterpart, Sparse Matrix-Sparse Vector Multiplication (SpMSpV), let alone to handling input vectors of varied sparsity. The key challenge is that the optimal solution varies with the sparsity level, the distribution of the data, and the compute platform, so a static solution does not suffice. In this paper, we propose an adaptive SpMV/SpMSpV framework that automatically selects the appropriate SpMV/SpMSpV kernel on GPUs for any sparse matrix and vector at runtime. Based on a systematic analysis of key factors such as the computing pattern, workload distribution, and write-back strategy, eight candidate SpMV/SpMSpV kernels are encapsulated into the framework to deliver high performance in a seamless manner. A comprehensive study of machine-learning-based kernel selectors is performed, from both accuracy and overhead perspectives, to choose the kernel and adapt to variations in both the input and the hardware. Experiments demonstrate that the adaptive framework substantially outperforms the previous state of the art in real-world applications on NVIDIA Tesla K40m, P100 and V100 GPUs.
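
To give a concrete picture of the runtime-dispatch idea, below is a minimal CUDA sketch that is not taken from the paper: it chooses between a CSR-based SpMV kernel and a CSC-based SpMSpV kernel using a fixed density threshold on the input vector, whereas the framework described above selects among eight candidate kernels with a machine-learning-based selector. All kernel names, data layouts, and the threshold value here are illustrative assumptions.

// Illustrative sketch only; not the paper's implementation.
#include <cuda_runtime.h>

// Row-parallel CSR SpMV: y = A * x, one thread per row (simple baseline).
__global__ void spmv_csr_kernel(int n_rows, const int* row_ptr, const int* col_idx,
                                const float* vals, const float* x, float* y) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n_rows) return;
    float sum = 0.0f;
    for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
        sum += vals[j] * x[col_idx[j]];
    y[row] = sum;
}

// Column-parallel CSC SpMSpV: visit only the nonzero entries of x and scatter
// their contributions into y with atomics (one thread per nonzero of x).
__global__ void spmspv_csc_kernel(int x_nnz, const int* x_idx, const float* x_val,
                                  const int* col_ptr, const int* row_idx,
                                  const float* vals, float* y) {
    int k = blockIdx.x * blockDim.x + threadIdx.x;
    if (k >= x_nnz) return;
    int c = x_idx[k];
    float xv = x_val[k];
    for (int j = col_ptr[c]; j < col_ptr[c + 1]; ++j)
        atomicAdd(&y[row_idx[j]], vals[j] * xv);
}

// Host-side dispatcher: a stand-in for the framework's kernel selector. A real
// selector would feed features (vector density, matrix nonzero distribution,
// GPU model) to a trained model; here a fixed density threshold decides.
void adaptive_spmv(int n_rows, int n_cols, int x_nnz,
                   /* CSR view */ const int* d_row_ptr, const int* d_col_idx, const float* d_csr_vals,
                   /* CSC view */ const int* d_col_ptr, const int* d_row_idx, const float* d_csc_vals,
                   const float* d_x_dense, const int* d_x_idx, const float* d_x_val,
                   float* d_y) {
    const double density = static_cast<double>(x_nnz) / n_cols;
    const int threads = 256;
    if (density > 0.05) {   // assumed threshold; the paper learns this decision
        int blocks = (n_rows + threads - 1) / threads;
        spmv_csr_kernel<<<blocks, threads>>>(n_rows, d_row_ptr, d_col_idx,
                                             d_csr_vals, d_x_dense, d_y);
    } else {
        cudaMemset(d_y, 0, n_rows * sizeof(float));
        int blocks = (x_nnz + threads - 1) / threads;
        spmspv_csc_kernel<<<blocks, threads>>>(x_nnz, d_x_idx, d_x_val,
                                               d_col_ptr, d_row_idx, d_csc_vals, d_y);
    }
}

The sketch only shows where the selection step sits in the call path; in the framework itself the candidate kernels also differ in computing pattern, workload distribution, and write-back strategy, and the decision is made by a learned model rather than a hand-set threshold.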
