When to Reason: Semantic Router for vLLM
Chen Wang, Xunzhuo Liu, Yuhan Liu, Yue Zhu, Xiangxi Mo, Junchen Jiang, Huamin Chen
–arXiv.org Artificial Intelligence
Large Language Models (LLMs) demonstrate substantial accuracy gains when augmented with reasoning modes such as chain-of-thought and inference-time scaling. However, reasoning also incurs significant inference latency and token usage, with accompanying environmental and financial impacts, and these costs are unnecessary for many simple prompts. We present a semantic router that classifies queries based on their reasoning requirements and selectively applies reasoning only when beneficial. Our approach achieves a 10.2 percentage point improvement in accuracy on the MMLU-Pro benchmark while reducing response latency by 47.1% and token consumption by 48.5% compared to direct inference with vLLM. These results demonstrate that semantic routing offers an effective mechanism for balancing accuracy and efficiency in open-source LLM serving systems.
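The routing idea described in the abstract can be sketched in a few lines: classify each incoming query, then enable a reasoning mode only for queries predicted to benefit. The paper trains a real semantic classifier; the keyword heuristic, function names, and request fields below are hypothetical stand-ins for illustration only.

```python
# Minimal sketch of a semantic router that decides whether to enable a
# reasoning mode before dispatching a request to an LLM backend such as vLLM.
# needs_reasoning() is a toy heuristic standing in for the paper's trained
# classifier; the payload fields are illustrative, not a real vLLM API.

REASONING_CUES = ("prove", "derive", "step by step", "why", "explain", "calculate")

def needs_reasoning(prompt: str) -> bool:
    """Toy proxy for the semantic classifier: flag prompts that contain
    cues commonly associated with multi-step reasoning."""
    p = prompt.lower()
    return any(cue in p for cue in REASONING_CUES)

def route(prompt: str) -> dict:
    """Build a request payload, enabling chain-of-thought generation only
    when the classifier predicts the query benefits from it."""
    if needs_reasoning(prompt):
        # Reasoning path: longer budget, chain-of-thought enabled.
        return {"prompt": prompt, "reasoning": True, "max_tokens": 2048}
    # Fast path: short budget, direct answer.
    return {"prompt": prompt, "reasoning": False, "max_tokens": 256}

if __name__ == "__main__":
    print(route("What is the capital of France?"))
    print(route("Prove that sqrt(2) is irrational."))
```

A production router would replace the heuristic with a learned classifier and could also expose a confidence threshold to trade accuracy against the latency and token savings reported above.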
Oct-13-2025