Sleep-time Compute: Beyond Inference Scaling at Test-time
Kevin Lin, Charlie Snell, Yu Wang, Charles Packer, Sarah Wooders, Ion Stoica, Joseph E. Gonzalez
arXiv.org Artificial Intelligence
Scaling test-time compute has emerged as a key ingredient for enabling large language models (LLMs) to solve difficult problems, but it comes with high latency and inference cost. We introduce sleep-time compute, which allows models to "think" offline about contexts before queries are presented: by anticipating what queries users might ask and pre-computing useful quantities, we can significantly reduce the compute requirements at test-time. To demonstrate the efficacy of our method, we create modified versions of two reasoning tasks - Stateful GSM-Symbolic and Stateful AIME. We find that sleep-time compute can reduce the amount of test-time compute needed to achieve the same accuracy by ~5x on Stateful GSM-Symbolic and Stateful AIME, and that by scaling sleep-time compute we can further increase accuracy by up to 13% on Stateful GSM-Symbolic and 18% on Stateful AIME. Furthermore, we introduce Multi-Query GSM-Symbolic, which extends GSM-Symbolic by including multiple related queries per context. By amortizing sleep-time compute across related queries about the same context using Multi-Query GSM-Symbolic, we can decrease the average cost per query by 2.5x. We then conduct additional analysis to understand when sleep-time compute is most effective, finding the predictability of the user query to be well correlated with the efficacy of sleep-time compute. Finally, we conduct a case study of applying sleep-time compute to a realistic agentic SWE task.
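The core idea described above can be illustrated with a toy sketch. This is not the paper's implementation: the `SleepTimeAgent` class, its `_think` placeholder for an expensive LLM call, and the word-count "quantity" are all hypothetical stand-ins used only to show how pre-computing during sleep-time amortizes cost across multiple queries about the same context.

```python
# Toy sketch (NOT the paper's implementation) of sleep-time compute:
# do expensive "thinking" about a context once, offline, then reuse
# the resulting notes to answer many related queries cheaply.

class SleepTimeAgent:
    def __init__(self, context):
        self.context = context
        self.notes = None          # quantities pre-computed at sleep-time
        self.expensive_calls = 0   # stand-in for test-time compute cost

    def _think(self, text):
        # Placeholder for an expensive LLM reasoning call over the context.
        self.expensive_calls += 1
        return {"word_count": len(text.split())}

    def sleep(self):
        # Sleep-time: anticipate likely queries, pre-compute useful notes.
        self.notes = self._think(self.context)

    def answer(self, query):
        # Test-time: reuse the notes if present; only think from scratch
        # when no sleep-time compute was performed.
        if self.notes is None:
            self.notes = self._think(self.context)
        return self.notes["word_count"]

agent = SleepTimeAgent("Alice has three apples and gives one to Bob")
agent.sleep()  # one offline reasoning call
answers = [agent.answer(q) for q in ["How many words?", "Count the words."]]
# Two queries are served, but only one expensive call was ever made,
# so the per-query cost is amortized across the related queries.
```

Under this sketch, scaling sleep-time compute would correspond to spending more `_think` calls offline to enrich `notes`, trading idle-time cost for lower test-time latency.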
Apr-18-2025