On the Role of Temperature Sampling in Test-Time Scaling

Yuheng Wu, Azalia Mirhoseini, Thierry Tambe

arXiv.org Artificial Intelligence 

Large language models (LLMs) can improve reasoning at inference time through test-time scaling (TTS), where multiple reasoning traces are generated and the best one is selected. Prior work shows that increasing the number of samples K steadily improves accuracy. In this paper, we demonstrate that this trend does not hold indefinitely: at large K, further scaling yields no gains, and certain hard questions remain unsolved regardless of the number of traces. Interestingly, we find that different sampling temperatures solve different subsets of problems, implying that single-temperature scaling explores only part of a model's potential. We therefore propose scaling along the temperature dimension, which enlarges the reasoning boundary of LLMs. Temperature scaling also enables base models to reach performance comparable to reinforcement learning (RL)-trained counterparts, without additional post-training. We further provide a comprehensive analysis of this phenomenon and design a multi-temperature voting method that reduces the overhead of temperature scaling. Overall, our findings suggest that TTS is more powerful than previously thought, and that temperature scaling offers a simple and effective way to unlock the latent potential of base models.

Large language models (LLMs) have demonstrated strong reasoning capabilities for complex problems at test time (Wei et al., 2022). As illustrated in Figure 1a, two main approaches have emerged to achieve such reasoning. The first trains models to produce long reasoning traces with self-reflection and correction, often implemented through reinforcement learning (RL) (Guo et al., 2025a; Yang et al., 2025c). While effective, this approach requires costly and time-consuming training (Liu et al., 2025a).
The second, known as test-time scaling (TTS) (Brown et al., 2024; Snell et al., 2025; Zhao et al., 2025), shifts the burden to inference: the model generates multiple reasoning traces in parallel and a verifier selects the most reliable one (Saad-Falcon et al., 2025a).
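The sampling-and-selection loop described above, extended with the paper's idea of drawing traces at several temperatures, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `generate` callable, its `temperature` keyword, and the answer-extraction convention are all assumptions standing in for a real LLM sampler and verifier; answer-level majority voting is used as a simple stand-in for the selection step.

```python
from collections import Counter

def multi_temperature_vote(generate, question, temperatures, k_per_temp):
    """Sketch of temperature-scaled TTS: draw k_per_temp traces at each
    temperature, pool the extracted final answers, and majority-vote.
    `generate` is a hypothetical sampler returning {"answer": ...}."""
    answers = []
    for t in temperatures:
        for _ in range(k_per_temp):
            trace = generate(question, temperature=t)
            answers.append(trace["answer"])
    winner, votes = Counter(answers).most_common(1)[0]
    return winner, votes

# Deterministic toy sampler standing in for an LLM: low temperature
# mostly returns one answer; high temperature is more diverse.
def toy_generate(question, temperature):
    toy_generate.calls += 1
    pool = (["42", "42", "42", "41"] if temperature < 0.8
            else ["42", "43", "44", "42"])
    return {"answer": pool[toy_generate.calls % len(pool)]}
toy_generate.calls = 0

answer, votes = multi_temperature_vote(
    toy_generate, "6*7?", temperatures=[0.2, 0.7, 1.0], k_per_temp=4)
```

In this toy run, 12 traces are pooled across three temperatures and the most common answer wins; in the paper's setting, pooling across temperatures is what lets the vote cover problems that no single temperature solves on its own.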
