Budget-aware Test-time Scaling via Discriminative Verification
Kyle Montgomery, Sijun Tan, Yuqi Chen, Siyuan Zhuang, Tianjun Zhang, Raluca Ada Popa, Chenguang Wang
arXiv.org Artificial Intelligence
Test-time scaling is a powerful strategy for boosting the performance of large language models on complex reasoning tasks. While state-of-the-art approaches often employ generative verifiers to select the best solution from a pool of candidates, this method incurs prohibitive computational costs, limiting its practicality. In this work, we shift the focus to a more budget-aware paradigm: discriminative verification. We conduct a thorough empirical analysis and demonstrate that while discriminative verifiers may underperform in isolation, combining them with self-consistency in a hybrid approach creates a powerful and efficient test-time scaling mechanism. Notably, under a fixed compute budget, this hybrid approach surpasses state-of-the-art generative verification by a significant margin, achieving up to 15.3% higher accuracy on AIME2025. Our findings establish that for practical, real-world applications, budget-aware scaling with discriminative verifiers is not only a "free" upgrade over self-consistency, but also a more effective and efficient alternative to costly generative techniques. Code is available at https://github.com/wang-research-lab/verification.
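The abstract describes the hybrid only at a high level: combine a discriminative verifier's scores with self-consistency (majority voting over sampled solutions). A common way to realize such a combination is verifier-weighted voting, where each candidate's vote is weighted by its verifier score. The sketch below is a minimal illustration under that assumption; the function name and the toy answers/scores are hypothetical, not taken from the paper's released code.

```python
from collections import defaultdict

def weighted_self_consistency(answers, scores):
    """Verifier-weighted majority vote: return the final answer whose
    candidate solutions accumulate the highest total verifier score."""
    totals = defaultdict(float)
    for answer, score in zip(answers, scores):
        totals[answer] += score
    return max(totals, key=totals.get)

# Toy example: five sampled solutions reduced to final answers, each
# scored in [0, 1] by a (hypothetical) discriminative verifier.
answers = ["42", "42", "17", "42", "17"]
scores = [0.9, 0.2, 0.8, 0.3, 0.1]
print(weighted_self_consistency(answers, scores))  # "42" (1.4 vs. 0.9)
```

With uniform scores this reduces to plain self-consistency, which is why the hybrid can act as a "free" upgrade: the verifier only reweights votes, at a single forward pass per candidate rather than a full generative verification pass.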
Oct-17-2025