
Mixed precision in Graphics Processing Unit

Quentin Gallouédec
École Centrale de Lyon
arXiv:2110.12794 [cs.AR], 25 Oct 2021

@misc{gallouédec2021mixed,
   title={Mixed precision in Graphics Processing Unit},
   author={Quentin Gallouédec},
   year={2021},
   eprint={2110.12794},
   archivePrefix={arXiv},
   primaryClass={cs.AR}
}


Modern graphics processing units (GPUs) are designed and optimized to perform highly parallel numerical computations. This parallelism has enabled, and continues to promise, significant gains in both energy efficiency and computing performance. In this document, we survey the different applications of mixed precision. We recall the standards for numerical computation currently used in the overwhelming majority of systems. We show that mixed precision, which lowers the precision of an operation's inputs, does not necessarily lower the precision of its output. We then show how this principle carries over to one of the fields that most needs computing power: machine learning. Fixed-point numbers and half precision are two very effective ways to improve the training of complex neural networks. Mixed precision nevertheless requires suitable hardware, without which computation time can instead increase. The NVIDIA Tensor Core, found among others in the Tesla V100 range, is an example of a hardware-level implementation of mixed precision. Finally, by abandoning the traditional von Neumann model, mixed precision can also be applied at a lower level of abstraction, using phase-change memories.
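As a minimal illustration of the principle that reduced-precision inputs need not reduce output precision (a sketch of ours, not code from the paper), the CUDA snippet below computes the same dot product over FP16 inputs twice: once with a pure FP16 accumulator and once with an FP32 accumulator, the scheme Tensor Cores implement in hardware. The FP16 running sum stalls once each product falls below half the spacing between representable values, while the FP32 accumulator recovers the exact result.

#include <cstdio>
#include <cuda_fp16.h>

// Dot product over FP16 inputs, accumulated once in FP16 and once in FP32.
// Single-threaded on purpose: the point is the accumulator width, not speed.
__global__ void dot_products(const __half *a, const __half *b, int n,
                             float *out_fp32, __half *out_fp16) {
    float acc32 = 0.0f;                 // FP32 accumulator (mixed precision)
    __half acc16 = __float2half(0.0f);  // pure FP16 accumulator
    for (int i = 0; i < n; ++i) {
        acc32 += __half2float(a[i]) * __half2float(b[i]);
        acc16 = __hadd(acc16, __hmul(a[i], b[i]));
    }
    *out_fp32 = acc32;
    *out_fp16 = acc16;
}

int main() {
    const int n = 4096;
    __half *a, *b, *out16;
    float *out32;
    cudaMallocManaged(&a, n * sizeof(__half));
    cudaMallocManaged(&b, n * sizeof(__half));
    cudaMallocManaged(&out16, sizeof(__half));
    cudaMallocManaged(&out32, sizeof(float));
    for (int i = 0; i < n; ++i) {
        a[i] = __float2half(0.5f);      // each product is exactly 0.25
        b[i] = __float2half(0.5f);
    }
    dot_products<<<1, 1>>>(a, b, n, out32, out16);
    cudaDeviceSynchronize();
    // Exact result: 4096 * 0.25 = 1024. The FP16 sum stalls near 512,
    // where the spacing between FP16 values (0.5) makes acc + 0.25
    // round back to acc; the FP32 accumulator reaches 1024 exactly.
    printf("FP32 accumulator: %f\n", *out32);
    printf("FP16 accumulator: %f\n", __half2float(*out16));
    cudaFree(a); cudaFree(b); cudaFree(out16); cudaFree(out32);
    return 0;
}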
