
LLMPerf: GPU Performance Modeling meets Large Language Models

Khoi N.M. Nguyen, Hoang Duy Nguyen Do, Huyen Thao Le, Thanh Tuan Dao
FPT Software AI Center, Hanoi, Vietnam
arXiv:2503.11244 [cs.PF], 14 Mar 2025

Performance modeling, a pivotal domain in program cost analysis, currently relies on manually crafted models that are constrained by various program and hardware limitations, especially in the intricate landscape of GPGPU. Meanwhile, Large Language Models (LLMs) have demonstrated their effectiveness in addressing diverse programming challenges. Our work connects LLMs with performance modeling by employing an LLM as a performance estimator. Through experimental exploration with carefully designed large-scale OpenCL datasets, we highlight the potential capabilities as well as the main difficulties of using LLMs for performance modeling of OpenCL device source programs. As the first study in this line of work, our LLM-based performance model achieves a mean absolute percentage error (MAPE) of 24.25% on a large-scale generated validation set and 46.1% on a set of publicly available OpenCL programs.
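For readers unfamiliar with the reported metric: MAPE measures the average relative deviation of predictions from measured values, expressed as a percentage. Below is a minimal sketch of how such an evaluation could be computed; the runtime values are hypothetical placeholders, not data from the paper.

```python
import numpy as np

def mape(y_true, y_pred) -> float:
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical example: measured vs. predicted OpenCL kernel runtimes (ms).
measured = [1.20, 0.85, 3.40]
predicted = [1.00, 0.90, 4.10]
print(f"MAPE: {mape(measured, predicted):.2f}%")
```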
