Learning to Reason in LLMs by Expectation Maximization
Junghyun Lee, Branislav Kveton, Sunav Choudhary, Subhojyoti Mukherjee, Anup Rao, Ryan A. Rossi, Alexa Siu
Large language models (LLMs) solve reasoning problems by first generating a rationale and then answering. We formalize reasoning as a latent variable model and derive an expectation-maximization (EM) objective for learning to reason. This view connects EM to modern reward-based optimization, and shows that the main challenge lies in designing a sampling distribution that generates rationales justifying correct answers. We instantiate and compare several sampling schemes: rejection sampling with a budget, the self-taught reasoner (STaR), and prompt posterior sampling (PPS), which keeps only the rationalization stage of STaR. Our experiments on the ARC, MMLU, and OpenBookQA datasets with the Llama and Qwen models show that the sampling scheme can significantly affect the accuracy of learned reasoning models. Despite its simplicity, PPS outperforms the other sampling schemes.
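To make the latent-variable view concrete, here is a minimal sketch of the marginal likelihood and its EM lower bound, assuming the notation x (question), z (rationale), y (answer); the paper's exact formulation and notation may differ:

```latex
% Latent-variable view of reasoning (assumed notation, not the paper's):
% the rationale z is marginalized out of the answer likelihood, and a
% variational distribution q over rationales gives the EM lower bound.
\log p_\theta(y \mid x)
  = \log \sum_{z} p_\theta(z \mid x)\, p_\theta(y \mid x, z)
  \;\geq\; \mathbb{E}_{z \sim q(\cdot \mid x, y)}
     \left[ \log \frac{p_\theta(z \mid x)\, p_\theta(y \mid x, z)}{q(z \mid x, y)} \right].
```

Under this reading, the E-step chooses q close to the posterior over rationales that justify the correct answer, the M-step maximizes the expected complete-data log-likelihood, and the sampling schemes compared in the abstract can be seen as different choices of q. As one illustration, below is a toy sketch of rejection sampling with a budget; the `ToyModel` class and its `sample_rationale`/`answer` methods are hypothetical stand-ins for an LLM, not the paper's implementation.

```python
import random

class ToyModel:
    """Hypothetical stand-in for an LLM; the paper's experiments use
    Llama and Qwen. All names here are illustrative assumptions."""

    def sample_rationale(self, question: str) -> str:
        # z ~ p_theta(z | x): draw a candidate rationale
        return random.choice(["... so the answer is A", "... so the answer is B"])

    def answer(self, question: str, rationale: str) -> str:
        # p_theta(y | x, z): answer conditioned on the rationale
        return "A" if rationale.endswith("A") else "B"

def rejection_sample_rationales(model: ToyModel, question: str,
                                gold_answer: str, budget: int = 8) -> list[str]:
    """Budgeted rejection sampling: draw at most `budget` rationales and
    keep only those that lead the model to the correct answer."""
    kept = []
    for _ in range(budget):
        z = model.sample_rationale(question)
        if model.answer(question, z) == gold_answer:
            kept.append(z)
    # Accepted rationales would feed the M-step, e.g. fine-tuning on (x, z, y).
    return kept

if __name__ == "__main__":
    print(rejection_sample_rationales(ToyModel(), "Which option?", "A"))
```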
Dec-24-2025