A Comparison of GPU Execution Time Prediction using Machine Learning and Analytical Modeling

Marcos Amaris, Raphael Y. de Camargo, Mohamed Dyab, Alfredo Goldman, Denis Trystram
Institute of Mathematics and Statistics, University of São Paulo, São Paulo, Brazil
15th IEEE International Symposium on Network Computing and Applications, 2016

@inproceedings{amaris2016comparison,
   title={A Comparison of GPU Execution Time Prediction using Machine Learning and Analytical Modeling},
   author={Amaris, Marcos and de Camargo, Raphael Y. and Dyab, Mohamed and Goldman, Alfredo and Trystram, Denis},
   booktitle={15th IEEE International Symposium on Network Computing and Applications},
   year={2016}
}

Today, most high-performance computing (HPC) platforms have heterogeneous hardware resources (CPUs, GPUs, storage, etc.). A Graphics Processing Unit (GPU) is a parallel computing coprocessor specialized in accelerating vector operations. Predicting application execution times on these devices is a great challenge and is essential for efficient job scheduling. There are different approaches to this problem, such as analytical modeling and machine learning techniques. Analytical predictive models are useful, but they require manual encoding of the interactions between architecture and software and may not capture the complex interactions in GPU architectures. Machine learning techniques can learn to capture these interactions without manual intervention, but they may require large training sets. In this paper, we compare three different machine learning approaches (linear regression, support vector machines, and random forests) with a BSP-based analytical model for predicting the execution time of GPU applications. As input to the machine learning algorithms, we use profiling information from 9 applications executed over 9 different GPUs. We show that machine learning approaches provide reasonable predictions for different cases. Although the predictions were less accurate than those of the analytical model, they required no detailed knowledge of application code, hardware characteristics, or explicit modeling. Consequently, whenever a database with profile information is available or can be generated, machine learning techniques can be useful for deploying automated on-line performance prediction for scheduling applications on heterogeneous architectures containing GPUs.
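To make the workflow concrete, the sketch below shows how a machine-learning predictor of this kind could be trained on kernel profiling data, using a random forest (one of the three methods compared in the paper). The feature names, the synthetic data, and the use of scikit-learn's RandomForestRegressor are illustrative assumptions, not the paper's actual pipeline, profiles, or results.

# Minimal sketch: predicting GPU kernel execution time from profiling
# features with a random forest. Features and data are hypothetical
# placeholders, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical profiling features for each (kernel, GPU) pair.
threads = rng.integers(1_000, 10_000_000, n)        # launched threads
gmem    = rng.integers(10_000, 100_000_000, n)      # global-memory transactions
smem    = rng.integers(0, 1_000_000, n)             # shared-memory accesses
flops   = rng.integers(100_000, 1_000_000_000, n)   # arithmetic instructions
clock   = rng.uniform(0.7, 1.7, n)                  # GPU clock rate (GHz)
X = np.column_stack([threads, gmem, smem, flops, clock])

# Synthetic execution time (ms), loosely "work divided by clock rate".
y = (gmem * 2e-6 + flops * 5e-7) / clock * rng.uniform(0.9, 1.1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAPE on held-out kernels: {mean_absolute_percentage_error(y_test, pred):.3f}")

In practice, the profiling features would come from a tool such as nvprof, and one model can be trained per kernel or across kernels; the appeal of this approach, as the abstract notes, is that it needs no explicit model of the application or the GPU, only a database of measured profiles.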
