Sandwich: Separating Prefill-Decode Compilation for Efficient CPU LLM Serving
Juntao Zhao, Jiuru Li, Chuan Wu
arXiv.org Artificial Intelligence
Utilizing CPUs to serve large language models (LLMs) is a resource-friendly alternative to GPU serving. Existing CPU-based solutions ignore workload differences between the prefill and decode phases of LLM inference, applying a static per-NUMA (Non-Uniform Memory Access) node model partition and relying on vendor libraries for operator-level execution, which is suboptimal. We propose Sandwich, a hardware-centric CPU-based LLM serving engine that uses different execution plans for the prefill and decode phases and optimizes them separately. We evaluate Sandwich against diverse baselines and datasets on five CPU platforms, including x86 with AVX-2 and AVX-512, as well as ARM with NEON. Sandwich achieves an average 2.01x throughput improvement and 90% satisfactory time-to-first-token (TTFT) and time-per-output-token (TPOT) latencies with up to 3.40x lower requirements in single-sequence serving, and significant goodput improvement in continuous-batching serving. The GEMM kernels generated by Sandwich outperform representative vendor kernels and other dynamic-shape solutions, achieving performance comparable to static compilers with three orders of magnitude less kernel tuning cost.
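The prefill/decode asymmetry motivating the paper can be illustrated with a toy sketch: prefill processes every prompt token in one wide, compute-bound GEMM, while decode emits one token per step as a narrow GEMV that re-reads the weights each iteration and is therefore memory-bandwidth-bound on CPUs. All names below are illustrative assumptions, not Sandwich's actual API.

```python
# Toy contrast of prefill vs. decode workload shapes (illustrative only;
# not the Sandwich implementation).

def matmul(a, b):
    """Naive (m x k) @ (k x n) matrix multiply over nested lists."""
    k, n = len(b), len(b[0])
    return [[sum(row[p] * b[p][j] for p in range(k)) for j in range(n)]
            for row in a]

W = [[1, 0], [0, 1]]  # toy 2x2 weight matrix (identity, for clarity)

# Prefill: all prompt tokens in a single wide GEMM (compute-bound).
prompt = [[1, 2], [3, 4], [5, 6]]      # 3 tokens, hidden size 2
kv_cache = matmul(prompt, W)           # one (3 x 2) @ (2 x 2) call

# Decode: one token per step -> a (1 x 2) GEMV that re-reads W every
# iteration (memory-bandwidth-bound), hence a different optimal plan.
token = [[7, 8]]
for _ in range(2):                     # generate 2 tokens
    token = matmul(token, W)
    kv_cache.append(token[0])

print(len(kv_cache))  # -> 5 cached rows: 3 from prefill + 2 from decode
```

Because the two phases have such different arithmetic-intensity profiles, compiling a single execution plan for both (as static per-NUMA partitioning does) leaves performance on the table, which is the gap Sandwich targets.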
Jul-25-2025