Knowing When to Quit: A Principled Framework for Dynamic Abstention in LLM Reasoning
Hen Davidov, Nachshon Cohen, Oren Kalinsky, Yaron Fairstein, Guy Kushilevitz, Ram Yazdi, Patrick Rebeschini
Large language models (LLMs) using chain-of-thought reasoning often waste substantial compute by producing long, incorrect responses. Abstention can mitigate this by withholding outputs unlikely to be correct. While most abstention methods decide to withhold outputs before or after generation, dynamic mid-generation abstention considers early termination of unpromising reasoning traces at each token position. Prior work has explored empirical variants of this idea, but principled guidance for the abstention rule remains lacking. We present a formal analysis of dynamic abstention for LLMs, modeling abstention as an explicit action within a regularized reinforcement learning framework. An abstention reward parameter controls the trade-off between compute and information. We show that abstaining when the value function falls below this reward strictly outperforms natural baselines under general conditions. We further derive a principled and efficient method to approximate the value function. Empirical results on mathematical reasoning and toxicity avoidance tasks support our theory and demonstrate improved selective accuracy over existing methods.
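The core decision rule — abstain mid-generation whenever the estimated value of continuing falls below the abstention reward — can be sketched as follows. This is an illustrative toy, not the paper's implementation: `ToyModel`, `estimate_value`, and `r_abstain` are hypothetical stand-ins for an LLM with a value-function approximator and the abstention reward parameter.

```python
class ToyModel:
    """Hypothetical stand-in for an LLM with a value-function head."""
    eos = "<eos>"

    def __init__(self, completion, values):
        self.completion = completion  # tokens the model would emit
        self.values = values          # value estimate at each step
        self.step = 0

    def estimate_value(self, tokens):
        # Approximates V(state): expected reward of finishing from here.
        return self.values[min(self.step, len(self.values) - 1)]

    def next_token(self, tokens):
        tok = (self.completion[self.step]
               if self.step < len(self.completion) else self.eos)
        self.step += 1
        return tok


def generate_with_abstention(model, prompt, r_abstain, max_tokens=16):
    """Greedy decoding that terminates early (abstains) as soon as the
    estimated value of continuing drops below the abstention reward."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        if model.estimate_value(tokens) < r_abstain:
            return None  # abstain: stop spending compute on this trace
        tok = model.next_token(tokens)
        if tok == model.eos:
            break
        tokens.append(tok)
    return tokens
```

Raising `r_abstain` trades coverage for selective accuracy: more traces are cut off early, but the surviving ones are those the value estimate deems promising.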
FUSE: Ensembling Verifiers with Zero Labeled Data
Joonhyuk Lee, Virginia Ma, Sarah Zhao, Yash Nair, Asher Spector, Regev Cohen, Emmanuel J. Candès
Verification of model outputs is rapidly emerging as a key primitive for both training and real-world deployment of large language models (LLMs). In practice, this often involves using imperfect LLM judges and reward models since ground truth acquisition can be time-consuming and expensive. We introduce Fully Unsupervised Score Ensembling (FUSE), a method for improving verification quality by ensembling verifiers without access to ground truth correctness labels. The key idea behind FUSE is to control conditional dependencies between verifiers in a manner that improves the unsupervised performance of a class of spectral algorithms from the ensembling literature. Despite requiring zero ground truth labels, FUSE typically matches or improves upon semi-supervised alternatives in test-time scaling experiments with diverse sets of generator models, verifiers, and benchmarks. In particular, we validate our method on both conventional academic benchmarks such as GPQA Diamond and on frontier, unsaturated benchmarks such as Humanity's Last Exam and IMO Shortlist questions.
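The spectral ensembling idea FUSE builds on can be illustrated in miniature. Under a conditional-independence assumption, the leading eigenvector of the verifiers' covariance matrix is approximately proportional to their unknown reliabilities, so it can serve as unsupervised ensembling weights. The sketch below is a simplified instance of that classical spectral approach, not FUSE itself; the function name and setup are hypothetical.

```python
import numpy as np

def spectral_ensemble(scores):
    """Combine verifier scores with zero labels.

    scores: (n_items, n_verifiers) array of verifier outputs.
    The leading eigenvector of the verifier covariance matrix is used
    as a weight vector, assuming conditionally independent verifiers.
    """
    z = scores - scores.mean(axis=0)     # center each verifier's scores
    cov = z.T @ z / len(scores)          # verifier-by-verifier covariance
    _, vecs = np.linalg.eigh(cov)        # eigenvectors, ascending eigenvalues
    w = vecs[:, -1]                      # leading eigenvector
    if w.sum() < 0:                      # resolve the sign ambiguity
        w = -w
    return scores @ w                    # weighted ensemble score per item
```

With two informative verifiers and one uninformative one, the leading eigenvector concentrates weight on the informative pair, so the ensemble separates correct from incorrect items better than a label-free uniform average would.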
The secret project to settle controversial maths proof with a computer
One of the most bitterly contested proofs in modern mathematics may be on the verge of being untangled. Two projects, both aiming to use a computer program to cast new light on the controversy, are now up and running - with one having operated in secret for more than two years already. The developments are a positive sign that the row might find a solution, say mathematicians. The saga began in 2012 when Shinichi Mochizuki at Kyoto University, Japan, claimed to have proved a famous idea called the ABC conjecture, posting a 500-page proof online. The conjecture is simple to state, concerning prime numbers involved in solutions to the equation a + b = c and how these numbers relate to each other.
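For concreteness: the ABC conjecture compares c with the radical rad(abc), the product of the distinct primes dividing abc, and says that c rarely exceeds rad(abc) by much. The short sketch below (not from the article) computes the radical for one well-known triple where the radical is unusually small.

```python
def rad(n):
    """Radical of n: the product of the distinct primes dividing n."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

# Coprime triple with a + b = c:
a, b = 1, 8        # c = 9, so abc = 72 = 2^3 * 3^2
c = a + b
print(rad(a * b * c))  # rad(72) = 2 * 3 = 6, which is less than c = 9
```

Triples where c exceeds rad(abc), like this one, are the exceptional cases the conjecture asserts are scarce.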
When Stability Fails: Hidden Failure Modes of LLMs in Data-Constrained Scientific Decision-Making
Large language models (LLMs) are increasingly used as decision-support tools in data-constrained scientific workflows, where correctness and validity are critical. However, evaluation practices often emphasize stability or reproducibility across repeated runs. While these properties are desirable, stability alone does not guarantee agreement with statistical ground truth when such references are available. We introduce a controlled behavioral evaluation framework that explicitly separates four dimensions of LLM decision-making: stability, correctness, prompt sensitivity, and output validity under fixed statistical inputs. We evaluate multiple LLMs using a statistical gene prioritization task derived from differential expression analysis across prompt regimes involving strict and relaxed significance thresholds, borderline ranking scenarios, and minor wording variations. Our experiments show that LLMs can exhibit near-perfect run-to-run stability while systematically diverging from statistical ground truth, over-selecting under relaxed thresholds, responding sharply to minor prompt wording changes, or producing syntactically plausible gene identifiers absent from the input table. Although stability reflects robustness across repeated runs, it does not guarantee agreement with statistical ground truth in structured scientific decision tasks. These findings highlight the importance of explicit ground-truth validation and output validity checks when deploying LLMs in automated or semi-automated scientific workflows.
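The separation between stability and correctness can be made concrete with two independent metrics, sketched below under simple assumptions (the metric definitions are illustrative, not the paper's): stability as the fraction of runs matching the modal answer, and correctness as mean Jaccard overlap with a ground-truth selection.

```python
from collections import Counter

def stability(runs):
    """Fraction of runs that match the modal selection (run-to-run stability)."""
    counts = Counter(tuple(sorted(r)) for r in runs)
    return counts.most_common(1)[0][1] / len(runs)

def correctness(runs, truth):
    """Mean Jaccard agreement between each run's selection and ground truth."""
    truth = set(truth)
    scores = [len(set(r) & truth) / len(set(r) | truth) for r in runs]
    return sum(scores) / len(scores)
```

A model that always selects the same wrong genes scores perfectly on stability while scoring poorly on correctness, which is exactly the failure mode the abstract describes.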