Library Liberation: Competitive Performance Matmul Through Compiler-composed Nanokernels
Arun Thangamani, Md Asghar Ahmad Shahid, Adam Siemieniuk, Rolf Morel, Renato Golin, Alexander Heinecke
–arXiv.org Artificial Intelligence
The rapidly evolving landscape of AI and machine learning workloads has widened the gap between high-level domain operations and efficient hardware utilization. Achieving near-peak performance still demands deep hardware expertise: experts either handcraft target-specific kernels (e.g., DeepSeek) or rely on specialized libraries (e.g., CUTLASS), both of which add complexity and limit scalability for most ML practitioners. This paper introduces a compilation scheme that automatically generates scalable, high-performance microkernels by leveraging MLIR dialects to bridge domain-level operations and processor capabilities. Our approach removes the dependence on low-level libraries by enabling the compiler to auto-generate near-optimal code directly. At its core is a mechanism for composing nanokernels from low-level IR constructs with near-optimal register utilization, forming efficient microkernels tailored to each target. We implement this technique in an MLIR-based compiler supporting both vector and tile-based CPU instructions. Experiments show that the generated nanokernels are of production quality and competitive with state-of-the-art microkernel libraries.
Nov-19-2025