A Novel Compiler Transformation for Fast Sparse Matrix Multiplication in GPUs

Hossein Albakri, Kazem Cheshmi
McMaster University, Hamilton, Ontario, Canada
arXiv:2506.15174 [cs.PL], 18 Jun 2025

Sparse data structures are commonly used in neural networks to reduce the memory footprint. These data structures are compact but cause irregularities such as random memory accesses, which prevent efficient use of the memory hierarchy. GPUs are a common platform for machine learning practitioners, but running compact data structures on these devices often leads to slowdowns due to inefficient use of computing and memory resources. This paper proposes a new compiler transformation, enumerate-and-sparse-coarsen, that accelerates sparse matrix-matrix multiplication (SPMM) on GPU devices. The transformation increases data reuse in registers and caches while creating more balanced workloads for GPU computing resources. The transformation is tested on sparse neural networks in convolutional and transformer models. On an A100 GPU, with the number of columns of matrix B (bCols) in A × B = C ranging from 32 to 128, the transformation yields geometric mean speedups of 1.84× and 2.27× over cuBLAS and cuSPARSE baselines, respectively.
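For context on the kernel being optimized, the sketch below shows a minimal baseline SPMM in CUDA over a CSR-format sparse matrix, assuming row-major dense operands. This naive one-thread-per-output formulation is a generic illustration, not the paper's enumerate-and-sparse-coarsen transformation, and the kernel name and tiny test matrices are invented for the example. The indirect read of B through colIdx is the random, data-dependent memory access the abstract describes.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Baseline SPMM: C = A * B, where A is an M x K sparse matrix in CSR format
// and B (K x N), C (M x N) are dense row-major matrices.
// One thread computes one element of C. The read B[colIdx[p] * N + col] is
// irregular: its address depends on the sparsity pattern of A.
__global__ void spmm_csr_naive(int M, int N,
                               const int *rowPtr, const int *colIdx,
                               const float *vals,
                               const float *B, float *C) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= M || col >= N) return;

    float acc = 0.0f;
    for (int p = rowPtr[row]; p < rowPtr[row + 1]; ++p)
        acc += vals[p] * B[colIdx[p] * N + col];
    C[row * N + col] = acc;
}

int main() {
    // Tiny example: A = [[1, 0, 2], [0, 3, 0]] (2 x 3, CSR), B is 3 x 2.
    const int M = 2, N = 2;
    int   h_rowPtr[] = {0, 2, 3};
    int   h_colIdx[] = {0, 2, 1};
    float h_vals[]   = {1.f, 2.f, 3.f};
    float h_B[]      = {1.f, 2.f, 3.f, 4.f, 5.f, 6.f};
    float h_C[M * N];

    int *d_rowPtr, *d_colIdx;
    float *d_vals, *d_B, *d_C;
    cudaMalloc(&d_rowPtr, sizeof(h_rowPtr));
    cudaMalloc(&d_colIdx, sizeof(h_colIdx));
    cudaMalloc(&d_vals,   sizeof(h_vals));
    cudaMalloc(&d_B,      sizeof(h_B));
    cudaMalloc(&d_C,      sizeof(h_C));
    cudaMemcpy(d_rowPtr, h_rowPtr, sizeof(h_rowPtr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_colIdx, h_colIdx, sizeof(h_colIdx), cudaMemcpyHostToDevice);
    cudaMemcpy(d_vals,   h_vals,   sizeof(h_vals),   cudaMemcpyHostToDevice);
    cudaMemcpy(d_B,      h_B,      sizeof(h_B),      cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (M + block.y - 1) / block.y);
    spmm_csr_naive<<<grid, block>>>(M, N, d_rowPtr, d_colIdx, d_vals, d_B, d_C);
    cudaMemcpy(h_C, d_C, sizeof(h_C), cudaMemcpyDeviceToHost);

    // Expected output: 11 14 / 9 12
    for (int i = 0; i < M; ++i)
        printf("%.0f %.0f\n", h_C[i * N], h_C[i * N + 1]);
    return 0;
}
```

Because each thread reuses nothing across outputs and rows of A vary in length, this baseline both underuses registers and caches and produces imbalanced per-thread work; these are precisely the two costs the proposed transformation targets through coarsening and workload balancing.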
