The Anatomy of a Triton Attention Kernel
Ringlein, Burkhard, van Lunteren, Jan, Stoica, Radu, Parnell, Thomas
arXiv.org Artificial Intelligence
A long-standing goal in both industry and academia is to develop an LLM inference platform that is portable across hardware architectures, eliminates the need for low-level hand-tuning, and still delivers best-in-class efficiency. In this work, we demonstrate that portable, efficient cross-platform LLM inference is indeed possible and share our experience. We develop a paged attention kernel, the core performance-critical component of many LLM deployments, that builds exclusively on the domain-specific just-in-time compiled language Triton and achieves state-of-the-art performance on both NVIDIA and AMD GPUs. We describe our high-level approach, the key algorithmic and system-level improvements, the parameter auto-tuning required to unlock efficiency, and the integrations into a popular inference server that are necessary to bring the performance of a generic Triton attention kernel from 19.7% of the state-of-the-art to 105.9%. Our results highlight how open-source domain-specific languages can be leveraged to unlock model portability across different GPU vendors.
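To make the abstract concrete, the sketch below shows what the Triton programming model the paper builds on looks like: a Python-embedded, JIT-compiled kernel that runs on both NVIDIA and AMD GPUs. This is not the authors' paged attention kernel; it is a minimal illustrative example (the numerically stable row softmax at the heart of attention), and the function names and shapes here are assumptions chosen for brevity.

```python
import torch
import triton
import triton.language as tl

# Minimal Triton sketch: numerically stable row-wise softmax, the core
# reduction inside attention. Illustrative only; the paper's kernel adds
# paged KV-cache access, blockwise online softmax, and auto-tuning.
@triton.jit
def softmax_kernel(out_ptr, in_ptr, n_cols, BLOCK_SIZE: tl.constexpr):
    # Each program instance (one CUDA block / AMD workgroup) handles one row.
    row = tl.program_id(axis=0)
    col_offsets = tl.arange(0, BLOCK_SIZE)
    mask = col_offsets < n_cols
    # Out-of-range lanes read -inf so they contribute exp(-inf) = 0.
    x = tl.load(in_ptr + row * n_cols + col_offsets,
                mask=mask, other=float("-inf"))
    x = x - tl.max(x, axis=0)  # subtract the row max for stability
    num = tl.exp(x)
    y = num / tl.sum(num, axis=0)
    tl.store(out_ptr + row * n_cols + col_offsets, y, mask=mask)

def softmax(x: torch.Tensor) -> torch.Tensor:
    """Launch one program per row of a contiguous 2D tensor."""
    n_rows, n_cols = x.shape
    out = torch.empty_like(x)
    # BLOCK_SIZE must be a power of two that covers the row width.
    block = triton.next_power_of_2(n_cols)
    softmax_kernel[(n_rows,)](out, x, n_cols, BLOCK_SIZE=block)
    return out
```

Because the same Python source is JIT-compiled to PTX on NVIDIA and to AMD's ISA via ROCm, kernels like this are the mechanism by which the paper avoids vendor-specific hand-tuned code; the launch parameters (here only BLOCK_SIZE) are exactly the kind of knobs the paper's auto-tuning explores.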
Nov-18-2025
- Country:
  - Europe > Switzerland > Zürich > Zürich (0.14)
  - Europe > Sweden > Västra Götaland > Gothenburg (0.04)
  - North America > United States > Arizona > Maricopa County > Phoenix (0.04)
- Genre:
  - Research Report > New Finding (0.66)
- Industry:
  - Information Technology > Hardware (0.35)