
KEET: Explaining Performance of GPU Kernels Using LLM Agents

Joshua H. Davis, Klaudiusz Rydzy, Srinivasan Ramesh, Aadit Nilay, Daniel Nichols, Swapna Raj, Nikhil Jain, Abhinav Bhatele
Department of Computer Science, University of Maryland, College Park, MD, USA
arXiv:2605.04467 [cs.PF], 6 May 2026

@misc{davis2026keet,
   title={KEET: Explaining Performance of GPU Kernels Using LLM Agents},
   author={Joshua H. Davis and Klaudiusz Rydzy and Srinivasan Ramesh and Aadit Nilay and Daniel Nichols and Swapna Raj and Nikhil Jain and Abhinav Bhatele},
   year={2026},
   eprint={2605.04467},
   archivePrefix={arXiv},
   primaryClass={cs.PF},
   url={https://arxiv.org/abs/2605.04467}
}


Performance profiles of GPU kernels generated by tools such as Nsight Compute are rich in detail but often challenging to interpret. To achieve the best possible performance on a given GPU architecture, kernel developers must spend significant time analyzing and comparing profiles in the tool's graphical interface to identify and understand performance bottlenecks. Large Language Models (LLMs) have shown promise in understanding complex data and generating natural language explanations. In this paper, we propose the Kernel Execution Explanation Toolkit (KEET), an LLM-based agentic framework that interprets Nsight Compute profiles to generate useful, data-grounded natural language explanations of performance issues in GPU kernels, along with optimization suggestions. We evaluate KEET using several CUDA kernels of varying complexity on NVIDIA H100 GPUs. We find that the generated explanations, when provided as context, improve the quality of LLM code optimization and multiple-choice question answering in downstream tasks. We further demonstrate that the tool can interpret performance data from large sets of profiles to improve the quality of optimization suggestions.
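To make the general idea concrete, here is a minimal sketch of how Nsight Compute metrics (which the `ncu` CLI can export as CSV via `ncu --import profile.ncu-rep --csv`) might be condensed into a natural-language prompt for an LLM. The metric names, sample values, and prompt wording below are illustrative assumptions, not the paper's actual implementation:

```python
import csv
import io

# Hypothetical sample of Nsight Compute CSV output; the specific metrics
# and values here are made up for illustration.
SAMPLE_CSV = """Kernel Name,Metric Name,Metric Value
saxpy,sm__throughput.avg.pct_of_peak_sustained_elapsed,12.4
saxpy,dram__throughput.avg.pct_of_peak_sustained_elapsed,87.9
saxpy,launch__occupancy_limit_registers,100.0
"""

def build_prompt(csv_text: str) -> str:
    """Condense per-kernel profile metrics into an LLM prompt asking for
    a data-grounded bottleneck explanation and optimization suggestions."""
    lines = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        lines.append(f"- {row['Metric Name']} = {row['Metric Value']}")
    metrics = "\n".join(lines)
    return (
        "You are a GPU performance engineer. Given these Nsight Compute "
        "metrics for a CUDA kernel, explain the main bottleneck and suggest "
        f"optimizations, citing specific metrics:\n{metrics}"
    )

prompt = build_prompt(SAMPLE_CSV)
print(prompt)
```

An agentic framework like the one described would go further, e.g. selecting which metrics to surface and grounding each claim in the profile data, but the sketch shows the basic profile-to-prompt step.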

HGPU group © 2010-2026 hgpu.org

All rights belong to the respective authors
