Sparse matrix partitioning for optimizing SpMV on CPU-GPU heterogeneous platforms

Akrem Benatia, Weixing Ji, Yizhuo Wang, Feng Shi
School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
The International Journal of High Performance Computing Applications, Vol. 34(1) 66–80, 2020

@article{benatia2020sparse,

   title={Sparse matrix partitioning for optimizing SpMV on CPU-GPU heterogeneous platforms},

   author={Benatia, Akrem and Ji, Weixing and Wang, Yizhuo and Shi, Feng},

   journal={The International Journal of High Performance Computing Applications},

   volume={34},

   number={1},

   pages={66--80},

   year={2020},

   publisher={SAGE Publications Sage UK: London, England}

}

The sparse matrix–vector multiplication (SpMV) kernel dominates the computing cost in numerous applications. Most existing studies dedicated to improving this kernel target only one type of processing unit, mainly multicore CPUs or graphics processing units (GPUs), and have not explored the potential of the rapidly emerging CPU-GPU heterogeneous platforms. To take full advantage of these heterogeneous systems, the input sparse matrix has to be partitioned across the available processing units. The partitioning problem is made more challenging by the existence of many sparse formats whose performance depends both on the sparsity of the input matrix and on the underlying hardware. Thus, the best performance depends not only on how the input sparse matrix is partitioned but also on which sparse format is used for each partition. To address this challenge, we propose in this article a new CPU-GPU heterogeneous method for computing the SpMV kernel that combines different sparse formats to achieve better performance and better utilization of CPU-GPU heterogeneous platforms. The proposed solution horizontally partitions the input matrix into multiple block-rows and predicts their best sparse formats using machine learning-based performance models. A mapping algorithm is then used to assign the block-rows to the CPU and GPU(s) available in the system. Our experimental results using real-world large unstructured sparse matrices on two different machines show a noticeable performance improvement.
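The abstract outlines a three-stage pipeline: horizontal partitioning into block-rows, per-block sparse-format prediction, and CPU/GPU mapping. The following is a minimal, hypothetical Python/SciPy sketch of such a pipeline; the predict_format and assign_devices helpers are illustrative stand-ins, not the authors' trained performance models or mapping algorithm.

# Hypothetical sketch of the pipeline described in the abstract:
# (1) split a CSR matrix horizontally into contiguous block-rows,
# (2) pick a sparse format per block via a stand-in "performance model",
# (3) statically assign each block to a device.
import numpy as np
import scipy.sparse as sp

def partition_block_rows(A, num_blocks):
    """Horizontally partition a CSR matrix into contiguous block-rows."""
    A = A.tocsr()
    bounds = np.linspace(0, A.shape[0], num_blocks + 1, dtype=int)
    return [A[bounds[i]:bounds[i + 1], :] for i in range(num_blocks)]

def predict_format(block):
    """Placeholder for an ML-based format predictor (e.g. ELL vs CSR)."""
    nnz_per_row = np.diff(block.indptr)
    # Heuristic stand-in: regular row lengths favor ELL-like formats,
    # irregular rows favor CSR.
    if nnz_per_row.std() < 0.25 * max(nnz_per_row.mean(), 1.0):
        return "ELL"
    return "CSR"

def assign_devices(blocks, gpu_share=0.8):
    """Toy static mapping: send the densest blocks to the GPU until it holds
    roughly `gpu_share` of all nonzeros, and the rest to the CPU."""
    total_nnz = sum(b.nnz for b in blocks)
    order = sorted(range(len(blocks)), key=lambda i: -blocks[i].nnz)
    mapping, gpu_nnz = {}, 0
    for i in order:
        if gpu_nnz + blocks[i].nnz <= gpu_share * total_nnz:
            mapping[i] = "GPU"
            gpu_nnz += blocks[i].nnz
        else:
            mapping[i] = "CPU"
    return mapping

A = sp.random(10000, 10000, density=1e-3, format="csr")
blocks = partition_block_rows(A, num_blocks=8)
devices = assign_devices(blocks)
for i, b in enumerate(blocks):
    print(f"block {i}: format={predict_format(b)}, device={devices[i]}, nnz={b.nnz}")

In the paper's actual method, the per-block format choice comes from machine learning performance models rather than the simple regularity heuristic used here, and the CPU/GPU mapping is produced by a dedicated mapping algorithm rather than a fixed nonzero-share threshold.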
