Review of Inference-Time Scaling Strategies: Reasoning, Search and RAG
Zhichao Wang, Cheng Wan, Dong Nie
arXiv.org Artificial Intelligence
The performance gains of LLMs have historically been driven by scaling up model size and training data. However, the rapidly diminishing availability of high-quality training data introduces a fundamental bottleneck, shifting research attention toward inference-time scaling. This paradigm spends additional computation at deployment time to substantially improve LLM performance on downstream tasks without costly retraining. This review systematically surveys the techniques contributing to this new era of inference-time scaling, organizing the rapidly evolving field into two comprehensive perspectives: output-focused and input-focused methods. Output-focused techniques encompass complex, multi-step generation strategies, including reasoning (e.g., CoT, ToT, ReAct), search and decoding methods (e.g., MCTS, beam search), training for long CoT (e.g., RLVR, GRPO), and model ensembles. Input-focused techniques are primarily categorized into few-shot prompting and RAG, with RAG as the central focus. The RAG section is further detailed through a structured examination of query expansion, data, retrieval and reranking, LLM generation methods, and multi-modal RAG.
Oct-14-2025