
DGEMM on Integer Matrix Multiplication Unit

Hiroyuki Ootomo, Katsuhisa Ozaki, Rio Yokota
Tokyo Institute of Technology, Tokyo, Japan
arXiv:2306.11975 [cs.DC], 22 Jun 2023

@misc{ootomo2023dgemm,
  title={DGEMM on Integer Matrix Multiplication Unit},
  author={Hiroyuki Ootomo and Katsuhisa Ozaki and Rio Yokota},
  year={2023},
  eprint={2306.11975},
  archivePrefix={arXiv},
  primaryClass={cs.DC}
}

Deep learning hardware achieves high throughput and low power consumption by reducing computing precision and specializing in matrix multiplication. For machine learning inference, fixed-point computation is commonplace: the input and output values and the model parameters are quantized. Thus, many processors are now equipped with fast integer matrix multiplication units (IMMUs). It is of significant interest to find a way to harness these IMMUs to improve the performance of HPC applications while maintaining accuracy. We focus on the Ozaki scheme, which computes a high-precision matrix multiplication using lower-precision computing units, and show the advantages and disadvantages of using an IMMU. Our experiments with integer Tensor Cores show that we can compute double-precision matrix multiplication faster than cuBLAS and an existing Ozaki-scheme implementation on FP16 Tensor Cores on NVIDIA consumer GPUs. Furthermore, we demonstrate accelerating a quantum circuit simulation by up to 4.33× while maintaining FP64 accuracy.
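To illustrate the idea behind the Ozaki scheme on an integer unit, here is a minimal NumPy sketch: each FP64 operand is scaled by a power of two per row (or column), split into small integer slices, and the slice-pair products are computed with exact integer matrix multiplies before being recombined in FP64. This is not the paper's implementation; the names (`split_to_int_slices`, `ozaki_int_dgemm`) and the parameters `alpha` and `num_slices` are illustrative, and an int64 matmul stands in for the int8-input / int32-accumulate Tensor Core operation the paper targets.

```python
import numpy as np

def split_to_int_slices(M, axis, num_slices=6, alpha=7):
    """Split M (FP64) into `num_slices` integer slice matrices so that
    M ~= scale * sum_p slices[p] * 2**(-alpha * (p + 1)),
    with a power-of-two scale per row (axis=1) or per column (axis=0)."""
    amax = np.max(np.abs(M), axis=axis, keepdims=True)
    amax = np.where(amax == 0.0, 1.0, amax)
    scale = np.exp2(np.ceil(np.log2(amax)))   # power of two, so division is exact
    R = M / scale                             # normalized: |R| <= 1
    slices = []
    for p in range(num_slices):
        w = 2.0 ** (alpha * (p + 1))
        S = np.rint(R * w)                    # fits in roughly `alpha` signed bits
        slices.append(S.astype(np.int64))
        R = R - S / w                         # residual, refined by the next slice
    return slices, scale

def ozaki_int_dgemm(A, B, num_slices=6, alpha=7):
    """FP64 GEMM assembled from exact integer matmuls (Ozaki-scheme sketch).
    Here an int64 @ int64 product stands in for the hardware IMMU's
    int8-input / int32-accumulate matrix multiplication."""
    As, sa = split_to_int_slices(A, axis=1, num_slices=num_slices, alpha=alpha)
    Bs, sb = split_to_int_slices(B, axis=0, num_slices=num_slices, alpha=alpha)
    C = np.zeros((A.shape[0], B.shape[1]))
    for p, Ap in enumerate(As):
        for q, Bq in enumerate(Bs):
            if p + q >= num_slices:           # drop terms below target accuracy
                continue
            # The integer product is exact; accumulation into FP64 applies
            # the combined slice weight 2**(-alpha * (p + q + 2)).
            C += (Ap @ Bq).astype(np.float64) * 2.0 ** (-alpha * (p + q + 2))
    return sa * C * sb                        # undo the row/column scalings

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((64, 64))
    B = rng.standard_normal((64, 64))
    ref = A @ B
    err = np.max(np.abs(ozaki_int_dgemm(A, B) - ref)) / np.max(np.abs(ref))
    print(f"max relative error vs. FP64 GEMM: {err:.3e}")
```

In this sketch, truncating the terms with p + q >= num_slices means that s slices cost s(s+1)/2 integer matmuls rather than s²; the accuracy/throughput trade-off is then governed by num_slices and alpha, mirroring the trade-off the paper evaluates on integer Tensor Cores.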