
The Anatomy of a Triton Attention Kernel

Burkhard Ringlein, Jan van Lunteren, Radu Stoica, Thomas Parnell
IBM Research, Zurich, Switzerland
arXiv:2511.11581 [cs.LG], 7 Oct 2025

@misc{ringlein2025anatomytritonattentionkernel,
  title={The Anatomy of a Triton Attention Kernel},
  author={Burkhard Ringlein and Jan van Lunteren and Radu Stoica and Thomas Parnell},
  year={2025},
  eprint={2511.11581},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2511.11581}
}


A long-standing goal in both industry and academia is to develop an LLM inference platform that is portable across hardware architectures, eliminates the need for low-level hand-tuning, and still delivers best-in-class efficiency. In this work, we demonstrate that portable, efficient cross-platform LLM inference is indeed possible and share our experience. We develop a state-of-the-art paged attention kernel, the core performance-critical component of many LLM deployments, that builds exclusively on the domain-specific just-in-time compiled language Triton to achieve state-of-the-art performance on both NVIDIA and AMD GPUs. We describe our high-level approach, the key algorithmic and system-level improvements, the parameter auto-tuning required to unlock efficiency, and the integrations into a popular inference server that are necessary to bring the performance of a generic Triton attention kernel from 19.7% of the state-of-the-art to 105.9%. Our results highlight how open-source domain-specific languages can be leveraged to unlock model portability across different GPU vendors.
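For readers unfamiliar with the building blocks the abstract refers to, the sketch below shows what a paged-attention decode kernel in Triton typically looks like: one program per sequence walks the block table of a paged KV cache and accumulates the output with an online softmax. This is a minimal illustration, not the authors' kernel; the cache layout (shape `(num_pages, PAGE_SIZE, HEAD_DIM)`), the `block_tables` indirection, the argument names, and the autotune search space are all assumptions made for the example.

    # Minimal sketch of a paged-attention decode kernel in Triton.
    # NOT the paper's kernel: page layout, argument names, and the
    # autotune search space are illustrative assumptions.
    import triton
    import triton.language as tl

    @triton.autotune(  # the abstract stresses auto-tuning launch parameters
        configs=[triton.Config({}, num_warps=w, num_stages=s)
                 for w in (2, 4, 8) for s in (1, 2)],
        key=["HEAD_DIM", "PAGE_SIZE"],
    )
    @triton.jit
    def paged_decode_attention(
        q_ptr, k_cache_ptr, v_cache_ptr, out_ptr,   # (seqs, D), (pages, P, D)
        block_tables_ptr, seq_lens_ptr,             # logical -> physical pages
        stride_qs, stride_kp, stride_kt,
        stride_vp, stride_vt, stride_os,
        max_blocks_per_seq, sm_scale,
        HEAD_DIM: tl.constexpr, PAGE_SIZE: tl.constexpr,
    ):
        seq = tl.program_id(0)                      # one program per sequence
        d = tl.arange(0, HEAD_DIM)
        t = tl.arange(0, PAGE_SIZE)
        q = tl.load(q_ptr + seq * stride_qs + d).to(tl.float32)
        seq_len = tl.load(seq_lens_ptr + seq)

        m = tl.full((1,), float("-inf"), tl.float32)  # running softmax max
        l = tl.zeros((1,), tl.float32)                # running denominator
        acc = tl.zeros((HEAD_DIM,), tl.float32)       # unnormalized output

        for b in range(0, tl.cdiv(seq_len, PAGE_SIZE)):
            # Resolve this sequence's b-th logical block to a physical page.
            page = tl.load(block_tables_ptr + seq * max_blocks_per_seq + b)
            valid = (b * PAGE_SIZE + t) < seq_len
            k = tl.load(k_cache_ptr + page * stride_kp
                        + t[:, None] * stride_kt + d[None, :],
                        mask=valid[:, None], other=0.0).to(tl.float32)
            v = tl.load(v_cache_ptr + page * stride_vp
                        + t[:, None] * stride_vt + d[None, :],
                        mask=valid[:, None], other=0.0).to(tl.float32)
            # Scores for this page, then one online-softmax update.
            s = tl.sum(k * q[None, :], axis=1) * sm_scale
            s = tl.where(valid, s, float("-inf"))
            m_new = tl.maximum(m, tl.max(s, axis=0))
            alpha = tl.exp(m - m_new)               # rescale old partial sums
            p = tl.exp(s - m_new)
            acc = acc * alpha + tl.sum(p[:, None] * v, axis=0)
            l = l * alpha + tl.sum(p, axis=0)
            m = m_new

        tl.store(out_ptr + seq * stride_os + d, acc / l)

Launched with a grid of `(num_seqs,)`, this covers only a single-head decode path. A production kernel must also handle multiple attention heads, prefill, and the inference-server integration the abstract mentions, which is where much of the reported gap between 19.7% and 105.9% of the state of the art is closed.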