Agentic Code Optimization via Compiler-LLM Cooperation

Benjamin Mikek, Danylo Vashchilenko, Bryan Lu, Panpan Xu
AWS AI, USA
arXiv:2604.04238 [cs.PL] (5 Apr 2026)

@misc{mikek2026agenticcodeoptimizationcompilerllm,
   title={Agentic Code Optimization via Compiler-LLM Cooperation},
   author={Benjamin Mikek and Danylo Vashchilenko and Bryan Lu and Panpan Xu},
   year={2026},
   eprint={2604.04238},
   archivePrefix={arXiv},
   primaryClass={cs.PL},
   url={https://arxiv.org/abs/2604.04238}
}

Generating performant executables from high-level languages is critical to software performance across a wide range of domains. Modern compilers perform this task by passing code through a series of well-studied optimizations at progressively lower levels of abstraction, but may miss optimization opportunities that require high-level reasoning about a program’s purpose. Recent work has proposed using LLMs to fill this gap. While LLMs can achieve large speedups on some programs, they frequently generate code that is incorrect. In this work, we propose a method to balance the correctness of conventional compiler optimizations with the “creativity” of LLM-based code generation: compiler-LLM cooperation. Our approach integrates existing compiler optimization passes with LLM-based code generation at multiple levels of abstraction, retaining the best features of both types of code optimization. We realize our approach with a multi-agent system that includes (1) LLM-based optimization agents for each level of abstraction, (2) individual compiler constituents as tools, (3) an LLM-based test generation agent that probes the correctness and performance of generated code, and (4) a guiding LLM that orchestrates the other components. The strategy enables LLM-based optimization of input programs at multiple levels of abstraction and introduces a method for distributing computational budget between levels. Our extensive evaluation shows that compiler-LLM cooperation outperforms both existing compiler optimizations and level-specific LLM-based baselines, producing speedups up to 1.25x.

* * *

HGPU group © 2010-2026 hgpu.org
