CuBridge: An LLM-Based Framework for Understanding and Reconstructing High-Performance Attention Kernels
Shanghai Jiao Tong University
arXiv:2605.05023 [cs.LG] (6 May 2026)
@misc{ma2026cubridge,
  title={CuBridge: An LLM-Based Framework for Understanding and Reconstructing High-Performance Attention Kernels},
  author={Xing Ma and Yangjie Zhou and Wu Sun and Zihan Liu and Jingwen Leng and Yun Lin and Shixuan Sun and Minyi Guo and Jin Song Dong},
  year={2026},
  eprint={2605.05023},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2605.05023}
}
Efficient CUDA implementations of attention mechanisms are critical to modern deep learning systems, yet supporting diverse and evolving attention variants remains challenging. Existing frameworks and compilers trade performance for flexibility, while expert-written kernels achieve high efficiency but are difficult to adapt. Recent work explores large language models (LLMs) for GPU kernel generation, but prior studies report unstable correctness and significant performance gaps for complex operators such as attention. We present CuBridge, an LLM-based framework that adapts expert-written attention kernels through a structured lift-transfer-lower workflow. CuBridge lifts expert-written CUDA attention kernels into an executable intermediate representation (IR) that makes execution orchestration explicit while abstracting away low-level CUDA syntax. Given a user-provided PyTorch specification, CuBridge generates and verifies a target IR program, then reconstructs optimized CUDA code via reference-guided lowering. Across diverse attention variants and GPU platforms, CuBridge consistently produces correct kernels and substantially outperforms general frameworks, compiler-based approaches, and prior LLM-based methods.
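
The abstract does not show what a "user-provided PyTorch specification" looks like. As a minimal, hypothetical sketch, a specification for one attention variant (causal scaled dot-product attention) might be an ordinary PyTorch function like the one below; the function name, signature, and tensor layout are assumptions for illustration, not CuBridge's actual interface.

import math
import torch

def causal_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Hypothetical reference specification of causal attention in plain PyTorch.
    # q, k, v: [batch, heads, seq_len, head_dim]
    scale = 1.0 / math.sqrt(q.size(-1))
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale  # [B, H, S, S]
    s = scores.size(-1)
    # Boolean mask over future positions (strict upper triangle).
    mask = torch.triu(torch.ones(s, s, device=scores.device), diagonal=1).bool()
    scores = scores.masked_fill(mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return torch.matmul(weights, v)

if __name__ == "__main__":
    q = k = v = torch.randn(1, 8, 128, 64)
    print(causal_attention(q, k, v).shape)  # torch.Size([1, 8, 128, 64])

Under this reading, such a function would serve as the reference semantics: the abstract says CuBridge "generates and verifies a target IR program", so the IR and the reconstructed CUDA kernel would presumably be checked numerically against the PyTorch specification (e.g., with torch.allclose) before performance is compared.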
May 11, 2026 by hgpu