Performance Optimization of Deep Learning Sparse Matrix Kernels on Intel Max Series GPU
Old Dominion University, Norfolk, Virginia, USA
arXiv:2311.00368 [cs.LG] (1 Nov 2023)
@misc{zubair2023performance,
  title={Performance Optimization of Deep Learning Sparse Matrix Kernels on Intel Max Series GPU},
  author={Mohammad Zubair and Christoph Bauinger},
  year={2023},
  eprint={2311.00368},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
In this paper, we focus on three sparse matrix operations that are relevant for machine learning applications, namely, the sparse-dense matrix multiplication (SPMM), the sampled dense-dense matrix multiplication (SDDMM), and the composition of SDDMM with SPMM, also termed FusedMM. We develop optimized implementations for the SPMM, SDDMM, and FusedMM operations utilizing Intel oneAPI’s Explicit SIMD (ESIMD) SYCL extension API. In contrast to CUDA or standard SYCL, the ESIMD API enables the writing of explicitly vectorized kernel code. Sparse matrix algorithms implemented with the ESIMD API achieved performance close to the peak of the targeted Intel Data Center GPU. We compare our performance results to Intel’s oneMKL library on Intel GPUs and to a recent CUDA implementation of the sparse matrix operations on NVIDIA’s V100 GPU and demonstrate that our implementations outperform both.
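To make the three operations concrete, here is a minimal reference-semantics sketch in NumPy, using a dense array with an explicit zero pattern to stand in for a sparse matrix. This illustrates one common formulation of SPMM, SDDMM, and FusedMM; it is an assumption for illustration only, and does not reflect the paper's ESIMD kernels or its exact operator definitions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 4, 3, 5

# "Sparse" matrix S: a dense array whose zero entries mark the sparsity pattern.
mask = rng.random((m, n)) < 0.4
S = np.where(mask, rng.random((m, n)), 0.0)

A = rng.random((m, k))   # dense input
B = rng.random((k, n))   # dense input
D = rng.random((n, k))   # dense input for the SPMM stage

# SPMM: sparse (m x n) times dense (n x k) -> dense (m x k).
spmm = S @ D

# SDDMM: compute the dense product A @ B, but keep (sample) only the
# entries at S's nonzero positions, scaled by S's values.
sddmm = (A @ B) * S      # zeros in S zero out the corresponding entries

# FusedMM: the SDDMM result fed directly into an SPMM with D.
fusedmm = sddmm @ D
```

A fused kernel avoids materializing the intermediate SDDMM result in global memory, which is the motivation for treating the composition as a single operation.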
November 5, 2023 by hgpu