Retrieval-of-Thought: Efficient Reasoning via Reusing Thoughts
Ammar Ahmed, Azal Ahmad Khan, Ayaan Ahmad, Sheng Di, Zirui Liu, Ali Anwar
arXiv.org Artificial Intelligence
Large reasoning models improve accuracy by producing long reasoning traces, but this inflates latency and cost, motivating inference-time efficiency. We propose Retrieval-of-Thought (RoT), which reuses prior reasoning as composable "thought" steps to guide new problems. RoT organizes steps into a thought graph with sequential and semantic edges to enable fast retrieval and flexible recombination. At inference, RoT retrieves query-relevant nodes and applies reward-guided traversal to assemble a problem-specific template that guides generation. This dynamic template reuse reduces redundant exploration and, therefore, reduces output tokens while preserving accuracy. We evaluate RoT on reasoning benchmarks with multiple models, measuring accuracy, token usage, latency, and memory overhead. Findings show small prompt growth but substantial efficiency gains, with RoT reducing output tokens by up to 40%, inference latency by 82%, and cost by 59% while maintaining accuracy. RoT establishes a scalable paradigm for efficient LRM reasoning via dynamic template construction through retrieval.

Large Reasoning Models (LRMs) have demonstrated impressive capabilities in solving complex tasks by producing outputs accompanied by detailed reasoning trajectories (Xu et al., 2025a). These models adopt an intentionally slower and more deliberative inference process, mimicking human-like reasoning. This approach typically involves generating longer outputs and consuming increased inference-time compute to effectively address reasoning-intensive queries. Recent efforts to improve reasoning in LLMs have primarily focused on generating more output tokens to simulate thoughtful, multi-step reasoning (Snell et al., 2024). A common approach involves guiding generation using external reward models (Zhang et al., 2024). These include outcome-based reward models, such as Best-of-N (BoN) sampling.
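The pipeline the abstract describes (a thought graph with sequential and semantic edges, query-based retrieval of entry nodes, and a reward-guided traversal that assembles a template) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `ThoughtGraph` class, the bag-of-words embedding, and the use of query similarity as the traversal "reward" are all simplifying assumptions standing in for the paper's learned components.

```python
# Hypothetical sketch of RoT-style template assembly. The embedding and
# reward are toy stand-ins for a real encoder and reward model.
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words embedding (stand-in for a sentence encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ThoughtGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> reasoning-step text
        self.edges = {}   # node_id -> set of successor node_ids

    def add_trace(self, steps):
        """Insert a prior reasoning trace as a chain of sequential edges."""
        ids = []
        for step in steps:
            nid = len(self.nodes)
            self.nodes[nid] = step
            self.edges.setdefault(nid, set())
            ids.append(nid)
        for a, b in zip(ids, ids[1:]):
            self.edges[a].add(b)
        return ids

    def add_semantic_edges(self, threshold=0.6):
        """Link semantically similar steps across different traces."""
        embs = {i: embed(t) for i, t in self.nodes.items()}
        for i in self.nodes:
            for j in self.nodes:
                if i != j and cosine(embs[i], embs[j]) >= threshold:
                    self.edges[i].add(j)

    def retrieve(self, query, k=1):
        """Top-k nodes most similar to the query (traversal entry points)."""
        q = embed(query)
        ranked = sorted(self.nodes,
                        key=lambda i: cosine(q, embed(self.nodes[i])),
                        reverse=True)
        return ranked[:k]

    def assemble_template(self, query, max_len=4):
        """Greedy reward-guided walk: at each node, follow the unvisited
        successor whose text best matches the query (the toy 'reward')."""
        q = embed(query)
        node = self.retrieve(query, k=1)[0]
        template, seen = [], set()
        while node is not None and len(template) < max_len:
            template.append(self.nodes[node])
            seen.add(node)
            succ = [s for s in self.edges[node] if s not in seen]
            node = max(succ,
                       key=lambda s: cosine(q, embed(self.nodes[s])),
                       default=None)
        return template

g = ThoughtGraph()
g.add_trace(["identify the unknown quantity",
             "set up an equation for the unknown",
             "solve the equation step by step"])
g.add_trace(["identify relevant geometry facts",
             "apply the area formula"])
g.add_semantic_edges()
print(g.assemble_template("solve for the unknown in the equation"))
# → ['set up an equation for the unknown', 'solve the equation step by step']
```

The assembled step list would then be prepended to the prompt as a problem-specific template, steering the model away from redundant exploration; in the paper this traversal is guided by a learned reward rather than raw query similarity.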
Sep-29-2025