Block: Balancing Load in LLM Serving with Context, Knowledge and Predictive Scheduling
Da, Wei, Kalyvianaki, Evangelia
–arXiv.org Artificial Intelligence
This paper presents Block, a distributed scheduling framework designed to optimize load balancing and auto-provisioning across instances in large language model serving frameworks by leveraging contextual information from incoming requests. Unlike popular model serving systems that rely on monolithic and heuristic task schedulers, Block operates as a fully distributed, stateless, and predictive scheduling system to achieve low overhead, reliability, and scalability. It leverages the deterministic and predictable characteristics of LLM inference, such as host configurations, response lengths, and hardware performance, to make scheduling decisions based on accurately predicted metrics. Evaluation on a 12-GPU cluster shows that Block significantly outperforms heuristic schedulers, boosting serving capacity by up to 16.7% and reducing P99 tail latency by up to 49.5%. These performance gains remain consistent across diverse models, workloads, and configurations. Code and data are open-sourced.
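The core idea, predictive scheduling based on estimated request cost and measured hardware performance, can be illustrated with a minimal sketch. This is a hypothetical example, not the Block implementation: the instance names, the linear length predictor, and the least-predicted-work policy are all illustrative assumptions.

```python
# Hypothetical sketch of predictive load balancing for LLM serving.
# All names and the predictor are illustrative, not from the Block codebase.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    tokens_per_sec: float      # measured hardware throughput
    queued_tokens: float = 0.0 # predicted outstanding work on this instance

def predict_output_tokens(prompt: str) -> float:
    # Placeholder predictor: a real system would learn this from
    # request context (model, prompt features, past responses).
    return 50 + 2.0 * len(prompt.split())

def schedule(request: str, instances: list[Instance]) -> Instance:
    est = predict_output_tokens(request)
    # Route to the instance with the lowest predicted completion time,
    # then account for the new work so later decisions stay consistent.
    best = min(instances,
               key=lambda i: (i.queued_tokens + est) / i.tokens_per_sec)
    best.queued_tokens += est
    return best

insts = [Instance("gpu0", tokens_per_sec=100.0),
         Instance("gpu1", tokens_per_sec=50.0)]
chosen = schedule("summarize this long document please", insts)
print(chosen.name)  # the faster, empty instance wins here
```

Because each scheduler only needs the request itself plus cheap per-instance statistics, the decision can be made without a central stateful queue, which is what allows the distributed, stateless design described in the abstract.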
Aug-14-2025