Prediction of Performance and Power Consumption of GPGPU Applications
Birla Institute of Technology and Science, Pilani
arXiv:2305.01886 [cs.DC]
@misc{alavani2023prediction,
title={Prediction of Performance and Power Consumption of GPGPU Applications},
author={Gargi Alavani and Santonu Sarkar},
year={2023},
eprint={2305.01886},
archivePrefix={arXiv},
primaryClass={cs.DC}
}
Graphics Processing Units (GPUs) have become an integral part of High-Performance Computing in the drive toward Exascale performance. GPU application developers tune their code extensively to obtain optimal performance, making efficient use of the different resources available. While extracting optimal performance from applications on an HPC infrastructure, developers should also keep energy usage to a minimum, given the massive power consumption of data centres and HPC servers. This thesis presents two models that developers can use to analyse a CUDA kernel’s energy dissipation. The first predicts a CUDA kernel’s execution time: the PTX code is statically analysed to extract instruction features, control flow, and data dependence, and we propose two scheduling-algorithm approaches that satisfy the performance and hardware constraints. The second is a static-analysis-based power prediction model built using machine learning techniques. The features used to build this model are derived from static analysis of the PTX code and are chosen to capture the relationship between GPU power consumption and program features, which can aid developers in building energy-efficient, sustainable applications. The dataset used to validate both models includes kernels from different benchmark suites, of varying sizes, nature (e.g., compute-bound, memory-bound), and complexity (e.g., control divergence, memory access patterns). We also present a tool that has practically validated the effectiveness and ease of use of the two models as design-assistance tools for GPU development.
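As a rough illustration of the workflow the abstract describes, the sketch below counts a few PTX instruction classes as static features and fits a regression model to measured kernel power. The opcode grouping, feature names, and the choice of RandomForestRegressor are assumptions made for illustration; they are not the thesis’s actual feature set or model.

```python
# Hypothetical sketch of a static-analysis + ML power-prediction workflow.
# The opcode grouping and the regressor choice are illustrative assumptions,
# not the feature set or model used in the thesis.
from collections import Counter

from sklearn.ensemble import RandomForestRegressor

# Map a few PTX opcodes to coarse instruction classes (assumed grouping).
OPCODE_CLASSES = {
    "ld.global": "global_load",
    "st.global": "global_store",
    "ld.shared": "shared_load",
    "st.shared": "shared_store",
    "fma": "fp_arith",
    "add": "int_arith",
    "mul": "int_arith",
    "bra": "branch",
    "bar.sync": "sync",
}


def extract_static_features(ptx_text: str) -> dict:
    """Count instruction classes in a PTX listing (greatly simplified)."""
    counts = Counter()
    for line in ptx_text.splitlines():
        line = line.strip()
        for opcode, cls in OPCODE_CLASSES.items():
            if line.startswith(opcode):
                counts[cls] += 1
                break
    return dict(counts)


def train_power_model(X, y):
    """Fit a regressor mapping static feature vectors to measured power (W)."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X, y)
    return model
```

In such a setup, each kernel contributes one feature vector (the class counts above) and one measured average power value; the trained model can then estimate power for a new kernel from its PTX alone, without executing it.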
May 14, 2023 by hgpu