SLOs-Serve: Optimized Serving of Multi-SLO LLMs
Siyuan Chen, Zhipeng Jia, Samira Khan, Arvind Krishnamurthy, Phillip B. Gibbons
arXiv.org Artificial Intelligence
This paper introduces SLOs-Serve, a system for serving multi-stage large language model (LLM) requests with application- and stage-specific service level objectives (SLOs). The key idea behind SLOs-Serve is to customize the allocation of tokens to meet these SLO requirements. SLOs-Serve uses a multi-SLO dynamic-programming-based algorithm to continuously optimize token allocations under SLO constraints by exploring the full design space of chunked prefill and (optional) speculative decoding. Leveraging this resource-planning algorithm, SLOs-Serve effectively supports multi-SLO and multi-replica serving with dynamic request routing while remaining resilient to bursty arrivals. Our evaluation across 6 LLM application scenarios (including summarization, coding, chatbot, tool calling, and reasoning) demonstrates that SLOs-Serve improves per-GPU serving capacity by 2.2x on average compared to prior state-of-the-art systems.
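The core idea of allocating a per-iteration token budget across requests under SLO constraints can be illustrated with a minimal multiple-choice-knapsack dynamic program. This is a hedged sketch under assumed names and a simplified model (each request offers a few chunk-size options, each marked as meeting or missing its SLO this iteration, plus a zero-cost "defer" option); it is not SLOs-Serve's actual algorithm, which also reasons over chunked prefill and speculative decoding.

```python
def allocate_tokens(requests, budget):
    """Pick one option per request to maximize SLOs met within a token budget.

    requests: list of per-request option lists; each option is a pair
              (token_cost, slos_met), where slos_met is 1 if that chunk size
              keeps the request within its SLO this iteration, else 0.
              A (0, 0) "defer" option is assumed present for every request.
    budget:   total tokens available in this serving iteration.
    Returns (max SLOs met, chosen option index per request).
    """
    dp = [0] * (budget + 1)      # dp[b]: best SLO count using at most b tokens
    picks = []                   # picks[i][b]: option chosen for request i at budget b
    for opts in requests:
        ndp = [float("-inf")] * (budget + 1)
        pick = [-1] * (budget + 1)
        for b in range(budget + 1):
            for j, (cost, value) in enumerate(opts):
                if cost <= b and dp[b - cost] + value > ndp[b]:
                    ndp[b] = dp[b - cost] + value
                    pick[b] = j
        dp = ndp
        picks.append(pick)
    # Backtrack to recover the chosen option for each request.
    plan, b = [], budget
    for i in range(len(requests) - 1, -1, -1):
        j = picks[i][b]
        plan.append(j)
        b -= requests[i][j][0]
    plan.reverse()
    return dp[budget], plan
```

For example, with three requests whose SLO-meeting chunk sizes cost 4, 3, and 5 tokens and a budget of 7, the sketch serves the first two within SLO and defers the third. In a real scheduler this optimization would be re-run every iteration as requests arrive, progress through stages, and approach their deadlines.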
Apr-15-2025