
Dissecting Tensor Cores via Microbenchmarks: Latency, Throughput and Numerical Behaviors

Wei Sun, Ang Li, Tong Geng, Sander Stuijk, Henk Corporaal
Electronic Systems Group, Eindhoven University of Technology, the Netherlands
arXiv:2206.02874 [cs.AR] (6 Jun 2022)

@misc{sun2022dissecting,
   doi = {10.48550/arXiv.2206.02874},
   url = {https://arxiv.org/abs/2206.02874},
   author = {Sun, Wei and Li, Ang and Geng, Tong and Stuijk, Sander and Corporaal, Henk},
   keywords = {Hardware Architecture (cs.AR)},
   title = {Dissecting Tensor Cores via Microbenchmarks: Latency, Throughput and Numerical Behaviors},
   publisher = {arXiv},
   year = {2022}
}


Tensor Cores have been an important unit for accelerating fused Matrix Multiplication Accumulation (MMA) in all NVIDIA GPUs since the Volta architecture. To program Tensor Cores, users have to use either the legacy wmma APIs or the current mma APIs. The legacy wmma APIs are easier to use but can exploit only a limited subset of the features and power of Tensor Cores. Specifically, the wmma APIs support fewer operand shapes and cannot leverage the new sparse matrix multiplication feature of the newest Ampere Tensor Cores. However, the performance of the current programming interfaces has not been well explored. Furthermore, the computation numeric behaviors of the low-precision floating-point formats (TF32, BF16 and FP16) supported by the newest Ampere Tensor Cores also remain unclear. In this paper, we explore the throughput and latency of the current programming APIs. We also study the numeric behaviors of Tensor Core MMA and profile the intermediate operations, including the multiplications and additions of the inner product and the additions of the accumulation.
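For context, here is a minimal sketch of the legacy wmma path the abstract refers to: one warp computing a single 16x16x16 tile in FP16 with FP32 accumulation. The kernel name, operand layouts, and host setup below are illustrative, not taken from the paper.

    // Minimal wmma sketch (illustrative): one warp computes D = A*B + C
    // for one 16x16x16 tile, FP16 inputs with FP32 accumulation.
    // Requires a Tensor Core GPU (sm_70+); compile with: nvcc -arch=sm_70 wmma_demo.cu
    #include <cstdio>
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    __global__ void wmma_16x16x16(const half *a, const half *b, float *d) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

        wmma::fill_fragment(acc_frag, 0.0f);            // C = 0
        wmma::load_matrix_sync(a_frag, a, 16);          // leading dimension 16
        wmma::load_matrix_sync(b_frag, b, 16);
        wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);  // issued to Tensor Cores
        wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
    }

    int main() {
        half *a, *b; float *d;
        cudaMallocManaged(&a, 256 * sizeof(half));
        cudaMallocManaged(&b, 256 * sizeof(half));
        cudaMallocManaged(&d, 256 * sizeof(float));
        for (int i = 0; i < 256; ++i) { a[i] = __float2half(1.0f); b[i] = __float2half(1.0f); }
        wmma_16x16x16<<<1, 32>>>(a, b, d);              // exactly one warp
        cudaDeviceSynchronize();
        printf("d[0] = %f (expected 16.0)\n", d[0]);    // dot product of 16 ones
        cudaFree(a); cudaFree(b); cudaFree(d);
        return 0;
    }

The current mma path contrasted in the abstract instead exposes Tensor Cores at the granularity of single PTX mma.sync instructions (e.g. mma.sync.aligned.m16n8k16), which gives the programmer control over per-thread fragment layouts and is the only route to Ampere's sparse variant (mma.sp).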
