
Dynamic GPU Energy Optimization for Machine Learning Training Workloads

Farui Wang, Weizhe Zhang, Shichao Lai, Meng Hao, Zheng Wang
School of Cyberspace Science at Harbin Institute of Technology, Harbin 150000, China
arXiv:2201.01684 [cs.DC] (5 Jan 2022)

@article{wang2021gpoeo,
   title={Dynamic GPU Energy Optimization for Machine Learning Training Workloads},
   author={Wang, Farui and Zhang, Weizhe and Lai, Shichao and Hao, Meng and Wang, Zheng},
   journal={IEEE Transactions on Parallel and Distributed Systems},
   publisher={Institute of Electrical and Electronics Engineers (IEEE)},
   year={2021},
   issn={2161-9883},
   doi={10.1109/TPDS.2021.3137867},
   url={http://dx.doi.org/10.1109/TPDS.2021.3137867}
}

GPUs are widely used to accelerate the training of machine learning workloads. As modern machine learning models grow larger, they take longer to train, leading to higher GPU energy consumption. This paper presents GPOEO, an online GPU energy optimization framework for machine learning training workloads. GPOEO dynamically determines the optimal energy configuration by employing novel techniques for online measurement, multi-objective prediction modeling, and search optimization. To characterize the target workload behavior, GPOEO utilizes GPU performance counters. To reduce the performance counter profiling overhead, it uses an analytical model to detect training iteration changes and collects performance counter data only when an iteration shift is detected. GPOEO employs multi-objective models based on gradient boosting and a local search algorithm to find a trade-off between execution time and energy consumption. We evaluate GPOEO by applying it to 71 machine learning workloads from two AI benchmark suites running on an NVIDIA RTX 3080Ti GPU. Compared with the NVIDIA default scheduling strategy, GPOEO delivers a mean energy saving of 16.2% with a modest average execution time increase of 5.1%.
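The energy-versus-time trade-off that GPOEO searches over can be illustrated with a small NVML experiment. The Python sketch below is not the authors' implementation: GPOEO predicts time and energy from performance-counter features with gradient-boosting models and refines the configuration with a local search, whereas this sketch simply measures each candidate SM clock frequency directly and keeps the lowest-energy setting whose slowdown stays within a bound. The workload_step callback, the candidate clock list, and the 5% slowdown budget are illustrative assumptions; it requires pynvml, privileges to lock GPU clocks, and a Volta-or-newer GPU exposing total energy counters.

# Minimal sketch of an energy/time trade-off search, assuming pynvml and
# a GPU that supports locked clocks and total-energy readout. This is a
# direct-measurement stand-in for GPOEO's counter-based prediction models.
import time
import pynvml

def measure(handle, workload_step, clock_mhz, iters=3):
    """Lock the SM clock, run a few training iterations, and return
    (seconds per iteration, joules per iteration)."""
    pynvml.nvmlDeviceSetGpuLockedClocks(handle, clock_mhz, clock_mhz)
    e0 = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle)  # millijoules
    t0 = time.perf_counter()
    for _ in range(iters):
        workload_step()  # one training iteration of the target workload (assumed callback)
    dt = (time.perf_counter() - t0) / iters
    de = (pynvml.nvmlDeviceGetTotalEnergyConsumption(handle) - e0) / 1000.0 / iters
    return dt, de

def search_best_clock(workload_step, candidate_clocks, max_slowdown=0.05):
    """Keep the lowest-energy SM clock whose slowdown versus the fastest
    candidate stays within max_slowdown (the trade-off GPOEO optimizes
    with multi-objective models and local search)."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    try:
        clocks = sorted(candidate_clocks, reverse=True)
        base_t, best_e = measure(handle, workload_step, clocks[0])
        best_clk = clocks[0]
        for clk in clocks[1:]:
            t, e = measure(handle, workload_step, clk)
            if t <= base_t * (1.0 + max_slowdown) and e < best_e:
                best_clk, best_e = clk, e
        return best_clk, best_e
    finally:
        pynvml.nvmlDeviceResetGpuLockedClocks(handle)
        pynvml.nvmlShutdown()

Profiling every configuration this way is exactly the online overhead the paper avoids: GPOEO instead detects iteration boundaries analytically and predicts the two objectives from performance counters, so only a few configurations are ever run.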