
GE-SpMM: General-purpose Sparse Matrix-Matrix Multiplication on GPUs for Graph Neural Networks

Guyue Huang, Guohao Dai, Yu Wang, Huazhong Yang
Department of Electronic Engineering, Tsinghua University, Beijing, China
arXiv:2007.03179 [cs.DC], 7 Jul 2020

@misc{huang2020gespmm,
   title={GE-SpMM: General-purpose Sparse Matrix-Matrix Multiplication on GPUs for Graph Neural Networks},
   author={Guyue Huang and Guohao Dai and Yu Wang and Huazhong Yang},
   year={2020},
   eprint={2007.03179},
   archivePrefix={arXiv},
   primaryClass={cs.DC}
}

Graph Neural Networks (GNNs) have achieved significant improvements in various domains. Sparse Matrix-Matrix multiplication (SpMM) is a fundamental operator in GNNs, which performs a multiplication between a sparse matrix and a dense matrix. Accelerating SpMM on parallel hardware like GPUs faces the following challenges: From the GNN application perspective, compatibility needs to be considered. General GNN algorithms require SpMM-like operations (e.g., pooling) between matrices, which are not supported in current high-performance GPU libraries (e.g., Nvidia cuSPARSE). Moreover, the sophisticated preprocessing in previous implementations leads to heavy data-format conversion overheads in GNN frameworks. From the GPU hardware perspective, optimizations in SpMV (Sparse Matrix-Vector) designs on GPUs do not apply well to SpMM. SpMM exposes column-wise parallelism in the dense output matrix, but a straightforward generalization from SpMV leads to inefficient, uncoalesced access to the sparse matrix in global memory. The sparse row data can also be reused among GPU threads, which is not exploited by SpMM designs inherited from SpMV. To tackle these challenges, we propose GE-SpMM. GE-SpMM performs SpMM-like operations on sparse matrices represented in the most common Compressed Sparse Row (CSR) format, so it can be embedded in GNN frameworks with no preprocessing overheads and supports general GNN algorithms. We introduce the Coalesced Row Caching method to process columns in parallel and ensure coalesced access to sparse matrix data. We also present Coarse-grained Warp Merging to reduce redundant data loading among GPU warps. Experiments on a real-world graph dataset show that GE-SpMM achieves up to 1.41X speedup over Nvidia cuSPARSE and up to 1.81X over GraphBLAST. We also embed GE-SpMM in GNN frameworks and obtain up to 3.67X speedup on popular GNN models like GCN and GraphSAGE.
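To make the Coalesced Row Caching idea concrete, below is a minimal, hypothetical CUDA sketch of a CSR SpMM kernel: each warp handles one sparse row, cooperatively stages the row's column indices and values into shared memory with coalesced loads, and each lane accumulates one column of the dense output. The kernel name, tile size, and the simplifying assumption that the dense matrix has at most 32 columns are ours, not taken from the GE-SpMM paper or source code, and the Coarse-grained Warp Merging optimization is not shown.

    // Hypothetical sketch of CSR SpMM with warp-level row caching.
    // Assumes N <= 32 (one thread per output column); GE-SpMM itself tiles
    // arbitrary N across thread blocks. Illustrative only, not the authors' code.
    #include <cuda_runtime.h>

    #define WARP_SIZE 32

    // C[M x N] = A[M x K] (CSR: rowPtr, colIdx, vals) * B[K x N], B and C row-major.
    // Launch: blockDim = (WARP_SIZE, warpsPerBlock), gridDim.x = ceil(M / warpsPerBlock),
    // shared memory = warpsPerBlock * WARP_SIZE * (sizeof(int) + sizeof(float)) bytes.
    __global__ void csr_spmm_row_caching(int M, int N,
                                         const int*   __restrict__ rowPtr,
                                         const int*   __restrict__ colIdx,
                                         const float* __restrict__ vals,
                                         const float* __restrict__ B,
                                         float*       __restrict__ C) {
        extern __shared__ char smem[];
        // Per-warp staging buffers for one tile of the sparse row.
        int*   s_col = reinterpret_cast<int*>(smem) + threadIdx.y * WARP_SIZE;
        float* s_val = reinterpret_cast<float*>(smem + blockDim.y * WARP_SIZE * sizeof(int))
                       + threadIdx.y * WARP_SIZE;

        int row  = blockIdx.x * blockDim.y + threadIdx.y;  // one warp per sparse row
        int lane = threadIdx.x;
        if (row >= M) return;                              // whole warp exits together

        float acc = 0.0f;
        int start = rowPtr[row], end = rowPtr[row + 1];

        // Walk the row's nonzeros in tiles of WARP_SIZE.
        for (int tile = start; tile < end; tile += WARP_SIZE) {
            int idx = tile + lane;
            if (idx < end) {                               // coalesced load of the tile
                s_col[lane] = colIdx[idx];
                s_val[lane] = vals[idx];
            }
            __syncwarp();
            int tileLen = min(WARP_SIZE, end - tile);
            if (lane < N) {
                // Every lane reuses the cached row data; reads of B are coalesced
                // across lanes because consecutive lanes touch consecutive columns.
                for (int j = 0; j < tileLen; ++j)
                    acc += s_val[j] * B[s_col[j] * N + lane];
            }
            __syncwarp();
        }
        if (lane < N) C[row * N + lane] = acc;
    }

The design point this sketch tries to capture is the one stated in the abstract: the sparse row is read from global memory once per warp with coalesced accesses and then reused by all threads computing different output columns, instead of each thread re-reading the row as a naive SpMV generalization would.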
