DuetServe: Harmonizing Prefill and Decode for LLM Serving via Adaptive GPU Multiplexing
Lei Gao, Chaoyi Jiang, Hossein Entezari Zarch, Daniel Wong, Murali Annavaram
arXiv.org Artificial Intelligence
Modern LLM serving systems must sustain high throughput while meeting strict latency service level objectives (SLOs) across two distinct inference phases: compute-intensive prefill and memory-bound decode. Existing approaches either (1) aggregate both phases on shared GPUs, causing interference between prefill and decode that degrades time-between-tokens (TBT); or (2) disaggregate the two phases across GPUs, improving latency but wasting resources through duplicated model weights and KV cache transfers. We present DuetServe, a unified LLM serving framework that achieves disaggregation-level isolation within a single GPU. DuetServe operates in aggregated mode by default and dynamically activates SM-level GPU spatial multiplexing when TBT degradation is predicted. Its key idea is to decouple prefill and decode execution through fine-grained, adaptive SM partitioning, providing phase isolation only when contention threatens latency SLOs. DuetServe integrates (1) an attention-aware roofline model to forecast iteration latency, (2) a partitioning optimizer that selects the SM split maximizing throughput under TBT constraints, and (3) an interruption-free execution engine that eliminates CPU-GPU synchronization overhead. Evaluations show that DuetServe improves total throughput by up to 1.3x while maintaining low generation latency compared to state-of-the-art frameworks.
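The two mechanisms named in the abstract, a roofline-style latency forecast and an SM-partition search under a TBT constraint, can be sketched as follows. This is a minimal illustration built only from the abstract's description; all function names, the bandwidth-sharing assumption, and the numeric parameters are assumptions, not DuetServe's actual model.

```python
# Hypothetical sketch: (1) roofline latency forecast, (2) SM-split search
# that maximizes iteration throughput subject to a TBT SLO on decode.
# Units and sharing model are illustrative assumptions.

def roofline_latency(flops, bytes_moved, peak_flops, peak_bw):
    """Roofline lower bound: the phase is limited by whichever of
    compute time or memory-traffic time is larger."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

def choose_sm_split(total_sms, prefill_work, decode_work,
                    peak_flops_per_sm, peak_bw, tbt_slo):
    """Brute-force the partition: give k SMs to prefill and the rest
    to decode. Keep splits whose decode latency (proxy for TBT) meets
    the SLO; among those, minimize the slower phase's latency, which
    maximizes per-iteration throughput."""
    best_k, best_iter = None, float("inf")
    for k in range(1, total_sms):
        m = total_sms - k  # SMs left for decode
        # Prefill is compute-bound: FLOP rate scales with its SM share.
        t_prefill = roofline_latency(
            prefill_work["flops"], prefill_work["bytes"],
            peak_flops_per_sm * k, peak_bw)
        # Decode is memory-bound: assume bandwidth is shared in
        # proportion to SM share (a simplifying assumption).
        t_decode = roofline_latency(
            decode_work["flops"], decode_work["bytes"],
            peak_flops_per_sm * m, peak_bw * m / total_sms)
        if t_decode <= tbt_slo:
            iter_time = max(t_prefill, t_decode)
            if iter_time < best_iter:
                best_k, best_iter = k, iter_time
    return best_k, best_iter

# Toy example in normalized units (TFLOP, TB, TFLOP/s, TB/s):
best_k, best_iter = choose_sm_split(
    total_sms=10,
    prefill_work={"flops": 8.0, "bytes": 0.5},
    decode_work={"flops": 0.2, "bytes": 1.0},
    peak_flops_per_sm=1.0, peak_bw=2.0, tbt_slo=2.0)
```

With these toy numbers the search assigns 6 SMs to prefill and 4 to decode: fewer decode SMs would violate the 2.0 s TBT bound, while more would slow prefill and lengthen the iteration.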
Nov-10-2025