GPGPU Performance and Power Estimation Using Machine Learning
Electrical and Computer Engineering, The University of Texas at Austin
21st IEEE International Symposium on High Performance Computer Architecture (HPCA), 2015
@inproceedings{wu2015gpgpu,
  title={GPGPU Performance and Power Estimation Using Machine Learning},
  author={Wu, Gene and Greathouse, Joseph L. and Lyashevsky, Alexander and Jayasena, Nuwan and Chiou, Derek},
  booktitle={21st IEEE International Symposium on High Performance Computer Architecture (HPCA)},
  year={2015}
}
Graphics Processing Units (GPUs) have numerous configuration and design options, including core frequency, number of parallel compute units (CUs), and available memory bandwidth. At many stages of the design process, it is important to estimate how these options affect application performance and power. This paper describes a GPU performance and power estimation model that applies machine learning techniques to measurements from real GPU hardware. The model is trained on a collection of applications run at numerous hardware configurations. From the measured performance and power data, the model learns how applications scale as the GPU’s configuration is changed. Hardware performance counter values are then gathered while running a new application on a single GPU configuration. These dynamic counter values are fed into a neural network that predicts which scaling curve from the training data best represents this kernel. That scaling curve is then used to estimate the performance and power of the new application at other GPU configurations. Over an 8x range in the number of CUs, a 3.3x range of core frequencies, and a 2.9x range of memory bandwidth, our model’s performance and power estimates are accurate to within 15% and 10% of real hardware, respectively. This is comparable to the accuracy of cycle-level simulators. However, after an initial training phase, our model runs as fast as, or faster than, the program running natively on real hardware.
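The pipeline lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation: it assumes scikit-learn and synthetic data, groups the training kernels' normalized scaling curves with k-means, trains a small neural network to map a kernel's performance-counter vector to a scaling cluster, and then scales one base-configuration measurement by the predicted cluster's mean curve. All names, sizes, and values here (n_kernels, base_time, the counter vectors) are illustrative assumptions.

# Sketch of the three stages described in the abstract (illustrative only):
# 1) cluster per-kernel scaling curves from training runs,
# 2) train a neural network mapping hardware counters -> scaling cluster,
# 3) scale a single base-configuration measurement by the predicted curve.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_kernels, n_counters, n_configs = 120, 12, 8   # assumed problem sizes

# Synthetic training data: each kernel has a measured scaling curve
# (execution time across n_configs hardware configurations) and a
# performance-counter vector gathered at the base configuration.
curves = np.abs(rng.normal(1.0, 0.3, size=(n_kernels, n_configs)))
counters = rng.normal(size=(n_kernels, n_counters))

# Stage 1: normalize each curve to the base configuration and group
# kernels that scale similarly as the GPU configuration changes.
norm_curves = curves / curves[:, :1]
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)
labels = kmeans.fit_predict(norm_curves)

# Stage 2: learn the mapping from counter values to scaling cluster.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
clf.fit(counters, labels)

# Stage 3: for a new kernel, measure counters and time at one base
# configuration, pick its cluster's mean curve, and scale.
new_counters = rng.normal(size=(1, n_counters))
base_time = 4.2                                  # one measured run, ms (assumed)
cluster = clf.predict(new_counters)[0]
predicted_curve = norm_curves[labels == cluster].mean(axis=0)
predicted_times = base_time * predicted_curve    # estimates at all configs
print(predicted_times)

In the paper, the same idea covers both performance and power: the model is trained on measured scaling data for each, so one base-configuration run of a new application yields estimates across the full range of CU counts, core frequencies, and memory bandwidths.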
March 12, 2015 by hgpu