The Impact of GPU DVFS on the Energy and Performance of Deep Learning: an Empirical Study
Department of Computer Science, Hong Kong Baptist University, Hong Kong
arXiv:1905.11012 [cs.PF], 27 May 2019
@misc{tang2019impact,
title={The Impact of GPU DVFS on the Energy and Performance of Deep Learning: an Empirical Study},
author={Zhenheng Tang and Yuxin Wang and Qiang Wang and Xiaowen Chu},
year={2019},
eprint={1905.11012},
archivePrefix={arXiv},
primaryClass={cs.PF}
}
Over the past few years, great progress has been made in improving the computing power of general-purpose graphics processing units (GPGPUs), which has fueled the prosperity of deep neural networks (DNNs) in multiple fields such as computer vision and natural language processing. A typical DNN training process repeatedly updates tens of millions of parameters, which not only requires huge computing resources but also consumes significant energy. To train DNNs in a more energy-efficient way, we empirically investigate the impact of GPU Dynamic Voltage and Frequency Scaling (DVFS) on the energy consumption and performance of deep learning. Our experiments cover a wide range of GPU architectures, DVFS settings, and DNN configurations. We observe that, compared to the default core frequency settings of the three tested GPUs, the optimal core frequency can save 8.7%~23.1% of the energy consumed by different DNN training cases. For inference, the savings range from 19.6% to 26.4%. Our findings suggest that GPU DVFS has great potential to help develop energy-efficient DNN training/inference schemes.
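The kind of measurement behind these numbers — sampling GPU power under a fixed core-clock setting and integrating it over the run to get energy — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the `nvidia-smi` invocations in the comments describe one common way to lock clocks and sample power, and the power samples below are made-up placeholder data.

```python
def energy_joules(power_samples_w, interval_s):
    """Trapezoidal integration of power samples (watts) taken at a
    fixed sampling interval (seconds); returns energy in joules."""
    if len(power_samples_w) < 2:
        return 0.0
    e = 0.0
    for a, b in zip(power_samples_w, power_samples_w[1:]):
        e += 0.5 * (a + b) * interval_s
    return e

# On an NVIDIA GPU one could fix the core clock and log power draw with,
# e.g. (setup assumed, not taken from the paper):
#   nvidia-smi -lgc 1200   # lock the graphics clock to 1200 MHz
#   nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits -lms 100
# Placeholder traces for two frequency settings over the same workload;
# note that energy = power x time, so if a lower clock stretches the run,
# the traces would differ in length as well as in wattage.
default_clock = [250.0, 248.0, 251.0, 249.0]   # watts at default clocks
tuned_clock   = [190.0, 192.0, 191.0, 189.0]   # watts at a lower core clock

e_default = energy_joules(default_clock, 0.1)
e_tuned = energy_joules(tuned_clock, 0.1)
saving = 100.0 * (1.0 - e_tuned / e_default)
print(f"energy saving: {saving:.1f}%")
```

Comparing such integrated-energy figures across a sweep of locked core frequencies is what yields an "optimal core frequency" for a given training or inference workload.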
May 30, 2019 by hgpu