
Mode-Conditioning Unlocks Superior Test-Time Scaling

Wu, Chen Henry, Goyal, Sachin, Raghunathan, Aditi

arXiv.org Artificial Intelligence

Parallel sampling promises substantial gains in test-time scaling, but its effectiveness is sharply limited by diversity collapse, where models concentrate on a few modes and repeated samples produce the same mistakes. We propose the mode-conditioning (ModC) framework, which explicitly allocates test-time compute across reasoning modes using either specialist models or mode-specific prefixes. ModC consistently improves scaling across controlled graph-search tasks and large-scale reasoning benchmarks, spanning model families and sizes from 0.5B to 7B. On OpenThoughts, fine-tuning Qwen2.5-7B with ModC achieves a 4x efficiency gain over standard training while also improving the maximum attainable Pass@k. We further show that gradient clustering enables ModC without explicit mode labels, yielding up to 10% gains on datasets such as NuminaMath. Finally, we show that ModC improves reinforcement learning (RL) and can further boost diversity-inducing RL methods. These results demonstrate that standard training underutilizes the diversity in data, and that ModC provides a simple, effective remedy for unlocking the full benefits of diversity in test-time scaling.
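The core idea of the abstract — splitting a fixed parallel-sampling budget across reasoning modes rather than drawing all samples unconditioned — can be illustrated with a minimal sketch. The mode-prefix strings below are hypothetical placeholders (the paper's actual prefixes are not given here); the Pass@k estimator is the standard unbiased one used in code- and reasoning-evaluation work.

```python
from math import comb

def allocate_budget(mode_prefixes, k):
    """Split a budget of k parallel samples as evenly as possible
    across mode-specific prefixes (ModC-style allocation)."""
    n_modes = len(mode_prefixes)
    base, extra = divmod(k, n_modes)
    # The first `extra` modes receive one additional sample.
    return {p: base + (1 if i < extra else 0)
            for i, p in enumerate(mode_prefixes)}

def pass_at_k(n, c, k):
    """Unbiased Pass@k estimator: probability that at least one of k
    samples drawn from n total samples (c of them correct) is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical mode prefixes; a budget of 8 samples is split 3/3/2.
budget = allocate_budget(["<mode:algebraic>", "<mode:casework>", "<mode:geometric>"], 8)
```

Each prefix would then condition its share of the generations, so repeated samples come from distinct modes instead of collapsing onto one.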


Neural Information Processing Systems

We thank the reviewers for their valuable feedback. This rebuttal includes further experiments to address the reviewers' remarks, and improved experimental results on CIFAR10-binary, finding a model with 76.83% accuracy and WM ≤ 2KB and a model with 74.87% accuracy and WM, MS ≤ 2KB, both of which outperform Bonsai. These ablation results support the design choices made in SpArSe in the context of memory-constrained MCUs. On MNIST, SpArSe achieves an accuracy of 99.17% with 1.45e3 parameters, compared to 99.15% accuracy. SpArSe would not work with the design choices made in previous NAS works, especially [23]. Reproducibility (R1): We are happy to make the implementation publicly available upon acceptance. We argue that: 1) SpArSe addresses a significant gap in the community, i.e., model design for memory-constrained MCUs. Validity of claim on line 66 (R1): Our claim is true for WM ≤ 2KB, but we will revise that sentence for clarity.