
Optimization of Large-Scale Sparse Matrix-Vector Multiplication on Multi-GPU Systems

Jianhua Gao, Weixing Ji, Yizhuo Wang
School of Artificial Intelligence, Beijing Normal University, Beijing, China
ACM Transactions on Architecture and Code Optimization, 2024

@article{gao2024optimization,
   title={Optimization of Large-Scale Sparse Matrix-Vector Multiplication on Multi-GPU Systems},
   author={Gao, Jianhua and Ji, Weixing and Wang, Yizhuo},
   journal={ACM Transactions on Architecture and Code Optimization},
   year={2024},
   publisher={ACM New York, NY}
}


Sparse matrix-vector multiplication (SpMV) is a key kernel in many iterative algorithms for solving sparse linear systems. The limited storage and computational resources of a single GPU restrict both the scale and the speed of SpMV. As real-world engineering problems grow in complexity, executing iterative solvers collaboratively across multiple GPUs becomes increasingly necessary. Although multi-GPU SpMV reduces kernel execution time, it introduces additional data-transfer overhead, which diminishes the performance gains of parallelization. Based on the distribution characteristics of non-zero elements in sparse matrices and the trade-off between redundant computation and data-transfer overhead, this paper introduces a series of SpMV optimization techniques tailored for multi-GPU environments that effectively improve the execution efficiency of iterative algorithms on multiple GPUs. First, we propose a two-level, non-zero-element-based matrix partitioning method that increases the overlap of kernel execution and data transfer. Then, to account for the irregular distribution of non-zero elements in sparse matrices, we propose a long-row-aware matrix partitioning method that hides more data transfers. Finally, we propose an optimization that trades costly data transfers for redundant but inexpensive short-row computations. Our experimental evaluation demonstrates that, compared with SpMV on a single GPU, the proposed method achieves average speedups of 2.00x and 1.85x on platforms equipped with two RTX 3090 and two Tesla V100-SXM2 GPUs, respectively, and an average speedup of 2.65x on a platform with four Tesla V100-SXM2 GPUs.
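To make the partitioning-plus-overlap idea concrete, below is a minimal CUDA sketch, not the authors' implementation: rows are split so each GPU receives roughly an equal share of non-zeros, and each GPU's result slice is broadcast to its peers asynchronously on the same stream as its kernel, so transfers for finished blocks can overlap kernels still running elsewhere. All names here (spmv_csr_kernel, partition_by_nnz, multi_gpu_spmv_iter) are illustrative assumptions; the paper's second partitioning level, long-row handling, and redundant short-row optimization are not reproduced.

```cpp
#include <cuda_runtime.h>
#include <vector>

// One thread per row of CSR SpMV: y[i] = sum_k val[k] * x[col[k]].
__global__ void spmv_csr_kernel(int nrows, const int* rowptr, const int* col,
                                const double* val, const double* x, double* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nrows) return;
    double sum = 0.0;
    for (int k = rowptr[i]; k < rowptr[i + 1]; ++k)
        sum += val[k] * x[col[k]];
    y[i] = sum;
}

// Split rows so each GPU receives roughly nnz/ngpus non-zeros (balancing by
// non-zeros rather than rows); returns ngpus+1 row boundaries.
std::vector<int> partition_by_nnz(const std::vector<int>& rowptr, int ngpus) {
    int nrows = (int)rowptr.size() - 1;
    long long nnz = rowptr[nrows];
    long long target = (nnz + ngpus - 1) / ngpus;
    std::vector<int> bounds(ngpus + 1, nrows);
    bounds[0] = 0;
    int g = 1;
    for (int i = 1; i <= nrows && g < ngpus; ++i)
        if (rowptr[i] >= g * target) bounds[g++] = i;
    return bounds;
}

// One SpMV iteration. Each GPU g holds its row block in local CSR form
// (rowptr rebased to 0) plus a full-length copy of x and y on device.
void multi_gpu_spmv_iter(int ngpus, const std::vector<int>& bounds,
                         std::vector<cudaStream_t>& stream,
                         std::vector<int*>& d_rowptr, std::vector<int*>& d_col,
                         std::vector<double*>& d_val,
                         std::vector<double*>& d_x, std::vector<double*>& d_y) {
    for (int g = 0; g < ngpus; ++g) {
        cudaSetDevice(g);
        int r0 = bounds[g], rows = bounds[g + 1] - r0;
        // Compute this GPU's slice of y (global rows r0 .. r0+rows).
        spmv_csr_kernel<<<(rows + 255) / 256, 256, 0, stream[g]>>>(
            rows, d_rowptr[g], d_col[g], d_val[g], d_x[g], d_y[g] + r0);
        // Broadcast the slice to peers on the same stream; these copies
        // overlap kernels still executing on other GPUs.
        for (int p = 0; p < ngpus; ++p)
            if (p != g)
                cudaMemcpyPeerAsync(d_y[p] + r0, p, d_y[g] + r0, g,
                                    rows * sizeof(double), stream[g]);
    }
    for (int g = 0; g < ngpus; ++g) {
        cudaSetDevice(g);
        cudaStreamSynchronize(stream[g]);
    }
}
```

Balancing by non-zeros rather than rows is what keeps per-GPU kernel times comparable, which in turn lets the peer copies for early-finishing blocks hide behind the kernels of the remaining GPUs.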

