Performance Optimization of Deep Learning Sparse Matrix Kernels on Intel Max Series GPU

Mohammad Zubair, Christoph Bauinger
Old Dominion University, Norfolk, Virginia, USA
arXiv:2311.00368 [cs.LG], (1 Nov 2023)

@misc{zubair2023performance,
  title={Performance Optimization of Deep Learning Sparse Matrix Kernels on Intel Max Series GPU},
  author={Mohammad Zubair and Christoph Bauinger},
  year={2023},
  eprint={2311.00368},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}


In this paper, we focus on three sparse matrix operations relevant to machine learning applications: sparse-dense matrix multiplication (SPMM), sampled dense-dense matrix multiplication (SDDMM), and the composition of SDDMM with SPMM, also termed FusedMM. We develop optimized implementations of the SPMM, SDDMM, and FusedMM operations using Intel oneAPI's Explicit SIMD (ESIMD) SYCL extension API. In contrast to CUDA or standard SYCL, the ESIMD API allows kernel code to be written with explicit vectorization. Our sparse matrix algorithms implemented with the ESIMD API achieve performance close to the peak of the targeted Intel Data Center GPU. We compare our performance results to Intel's oneMKL library on Intel GPUs and to a recent CUDA implementation of these sparse matrix operations on NVIDIA's V100 GPU, and demonstrate that our implementations outperform both.
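For reference, the three operations named in the abstract can be sketched in dense form. This is a minimal NumPy illustration of the mathematical definitions only, not the paper's optimized ESIMD kernels; in practice the sparse matrix would be stored in a compressed format such as CSR, and all shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 4, 5, 3

# Sparsity pattern S (boolean mask) of an m x k sparse matrix A.
S = rng.random((m, k)) < 0.4
A = rng.random((m, k)) * S   # "sparse" matrix, kept dense here for clarity
B = rng.random((k, n))       # dense input
X = rng.random((m, n))       # dense input
Y = rng.random((k, n))       # dense input

# SPMM: sparse-dense matrix multiplication, C = A @ B.
C_spmm = A @ B

# SDDMM: sampled dense-dense matrix multiplication; the product X @ Y.T
# is computed only at the nonzero positions of the sparse pattern S.
C_sddmm = (X @ Y.T) * S

# FusedMM: the composition of SDDMM with SPMM,
# P = SPMM(SDDMM(X, Y, S), B) = ((X @ Y.T) * S) @ B.
P_fused = ((X @ Y.T) * S) @ B
```

The fused form avoids materializing and re-reading the intermediate SDDMM result from memory, which is the motivation for treating FusedMM as a single kernel.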

* * *

HGPU group © 2010-2024 hgpu.org

All rights belong to the respective authors
