EvoEngineer: Mastering Automated CUDA Kernel Code Evolution with Large Language Models
Guo, Ping, Zhu, Chenyu, Chen, Siyuan, Liu, Fei, Lin, Xi, Lu, Zhichao, Zhang, Qingfu
–arXiv.org Artificial Intelligence
CUDA kernel optimization has become a critical bottleneck for AI performance, as deep learning training and inference efficiency directly depends on highly optimized GPU kernels. Despite the promise of Large Language Models (LLMs) for automating kernel optimization, the field suffers from a fragmented ecosystem of isolated, incomparable approaches with unclear problem formulations. Furthermore, general-purpose LLM code evolution methods cannot meet the strict correctness requirements of CUDA kernel optimization. We address these fundamental challenges by first formalizing CUDA kernel optimization as a code optimization task with a clear objective, constraints, and evaluation metrics. We then establish the first systematic LLM-based code evolution framework, EvoEngineer, which provides guidance for designing and adapting optimization strategies to balance performance and correctness. Finally, we implement a kernel optimization system based on this framework and conduct extensive experiments on 91 real-world CUDA kernels. Our results demonstrate that EvoEngineer achieves a principled balance between performance and correctness, with the highest average median speedup of 2.72x over baseline CUDA kernels and a code validity rate of 69.8%, outperforming existing methods on both dimensions. Our method achieves a maximum speedup of 36.75x over PyTorch kernels across all operations and delivers the highest speedup on 28 (56.0%) of the 50 operations that achieve over 2x acceleration.

CUDA kernel performance has become the critical bottleneck constraining the efficiency of AI training and inference. As foundation models continue scaling to unprecedented sizes (Guo et al., 2025; Jaech et al., 2024), computational demands necessitate maximum GPU utilization efficiency, where even marginal improvements in kernel performance can yield substantial reductions in computational costs.
However, manual kernel optimization requires deep expertise across GPU architectures, memory hierarchies, parallelization patterns, and hardware-specific features (Navarro et al., 2020; Hennessy & Patterson, 2011), constituting a major obstacle to scaling AI systems. The kernel optimization landscape is extremely complex, involving intricate tradeoffs between memory coalescing, thread divergence, occupancy optimization, and register usage (Ujaldón, 2016; Huang et al., 2021; Zhao et al., 2022).
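As a concrete illustration of one tradeoff named above, memory coalescing: the sketch below (not drawn from the paper) contrasts a strided access pattern, where a warp's 32 loads hit scattered cache lines, with a coalesced one, where adjacent threads read adjacent addresses and the hardware merges the loads into few memory transactions.

```cuda
// Illustrative sketch only; kernel names and the stride parameter are
// hypothetical, chosen to show the access-pattern contrast.

// Strided access: thread i reads in[i * stride], so consecutive threads in a
// warp touch addresses far apart, and each load may become its own
// global-memory transaction.
__global__ void copy_strided(const float* in, float* out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[(size_t)i * stride % n];
}

// Coalesced access: thread i reads in[i], so consecutive threads touch
// consecutive floats and a warp's loads collapse into one or two wide
// transactions, typically yielding far higher effective bandwidth.
__global__ void copy_coalesced(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}
```

An optimizer, human or LLM-based, must weigh such access-pattern rewrites against their effect on register usage and occupancy, which is precisely the kind of tradeoff the paper targets.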
Oct-7-2025