IA-SpGEMM: An Input-aware Auto-tuning Framework for Parallel Sparse Matrix-Matrix Multiplication

Zhen Xie, Guangming Tan, Weifeng Liu, Ninghui Sun
State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences
33rd ACM International Conference on Supercomputing (ICS ’19), 2019

@inproceedings{xie2019ia,
  title={IA-SpGEMM: an input-aware auto-tuning framework for parallel sparse matrix-matrix multiplication},
  author={Xie, Zhen and Tan, Guangming and Liu, Weifeng and Sun, Ninghui},
  booktitle={Proceedings of the ACM International Conference on Supercomputing},
  pages={94--105},
  year={2019},
  organization={ACM}
}

Sparse matrix-matrix multiplication (SpGEMM) is a sparse kernel that is used in a number of scientific applications. Although several SpGEMM algorithms have been proposed, almost all of them are restricted to the compressed sparse row (CSR) format, and the possible performance gain from exploiting other formats has not been well studied. The particular format and algorithm that yield the best performance for SpGEMM also remain undetermined. In this work, we conduct a prospective study on format-specific parallel SpGEMM algorithms and analyze their pros and cons. We then propose IA-SpGEMM, an input-aware auto-tuning framework for SpGEMM that provides a unified programming interface in the CSR format and automatically determines the best format and algorithm for arbitrary sparse matrices. For this purpose, we set up an algorithm set and design a deep learning model called MatNet, trained on over 2,700 matrices from the SuiteSparse Matrix Collection, to quickly and accurately predict the best solution from sparse features and density representations. We evaluate our framework on CPUs and a GPU, and the results show that IA-SpGEMM is on average 3.27x and 13.17x faster than MKL on an Intel and an AMD platform, respectively, and is 2.23x faster than cuSPARSE on an NVIDIA GPU.
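
To make the dispatch idea concrete, below is a minimal Python/SciPy sketch of input-aware kernel selection: extract a few sparsity features from the operands and hand them to a predictor that picks a format-specific SpGEMM kernel. All names here (extract_features, select_and_run_spgemm, the feature set, and the stand-in predictor) are illustrative assumptions, not the paper's MatNet, its feature definitions, or its algorithm set.

import numpy as np
import scipy.sparse as sp

def extract_features(A: sp.csr_matrix) -> np.ndarray:
    """Compute simple sparsity features of the kind an input-aware
    selector might consume (illustrative, not the paper's feature set)."""
    n_rows, n_cols = A.shape
    nnz_per_row = np.diff(A.indptr)        # row lengths from CSR row pointers
    return np.array([
        A.nnz / (n_rows * n_cols),         # overall density
        nnz_per_row.mean(),                # average row length
        nnz_per_row.std(),                 # row-length spread (irregularity)
        nnz_per_row.max(),                 # longest row
    ])

def select_and_run_spgemm(A: sp.csr_matrix, B: sp.csr_matrix, predictor):
    """Hypothetical dispatch: 'predictor' stands in for a trained model
    mapping operand features to an algorithm name."""
    algorithms = {
        # SciPy's built-in CSR SpGEMM serves as a stand-in kernel;
        # a real framework would register several format-specific kernels.
        "csr_gustavson": lambda: A @ B,
    }
    choice = predictor(extract_features(A), extract_features(B))
    return algorithms.get(choice, algorithms["csr_gustavson"])()

if __name__ == "__main__":
    A = sp.random(1000, 1000, density=0.01, format="csr")
    B = sp.random(1000, 1000, density=0.01, format="csr")
    # A trivial rule-based predictor in place of a learned model.
    C = select_and_run_spgemm(A, B, predictor=lambda fa, fb: "csr_gustavson")
    print(C.nnz)

The point of the sketch is the separation of concerns the paper argues for: feature extraction and kernel choice happen behind a single CSR-based entry point, so callers never see the format or algorithm actually used.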
