LLMs cannot spot math errors, even when allowed to peek into the solution
KV Aditya Srivatsa, Kaushal Kumar Maurya, Ekaterina Kochmar
arXiv.org Artificial Intelligence
Large language models (LLMs) demonstrate remarkable performance on math word problems, yet they have been shown to struggle with meta-reasoning tasks such as identifying errors in student solutions. In this work, we investigate the challenge of locating the first error step in stepwise solutions using two error reasoning datasets: VtG and PRM800K. Our experiments show that state-of-the-art LLMs struggle to locate the first error step in student solutions even when given access to the reference solution. To address this, we propose an approach that generates an intermediate corrected solution that aligns more closely with the student's original solution, which helps improve performance.
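The task described above can be sketched as a prompting setup: given a problem, a student's stepwise solution, and optionally a reference solution, ask a model for the index of the first incorrect step. This is a minimal illustrative sketch, not the paper's actual prompts or pipeline; all function names and the prompt wording are assumptions.

```python
import re

def build_error_location_prompt(problem, student_steps, reference_solution=None):
    """Assemble a first-error-step localization prompt.

    Hypothetical format for illustration; the paper's exact prompt
    templates are not reproduced here.
    """
    numbered = "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(student_steps))
    parts = [f"Problem: {problem}", f"Student solution:\n{numbered}"]
    if reference_solution:
        # The paper's finding: access to the reference alone is not enough.
        parts.append(f"Reference solution:\n{reference_solution}")
    parts.append(
        "Reply with the number of the first incorrect step, "
        "or 0 if the solution is fully correct."
    )
    return "\n\n".join(parts)

def parse_first_error_step(model_output):
    """Extract the first integer from the model's reply (0 = no error)."""
    m = re.search(r"-?\d+", model_output)
    return int(m.group()) if m else None
```

For example, `parse_first_error_step("The first error occurs at Step 3.")` returns `3`, which can then be compared against the dataset's gold first-error annotation.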
Sep 3, 2025
- Country:
- Asia
- Middle East
- Jordan (0.04)
- Saudi Arabia > Asir Province
- Abha (0.04)
- UAE > Abu Dhabi Emirate
- Abu Dhabi (0.14)
- Singapore (0.04)
- Thailand > Bangkok
- Bangkok (0.04)
- Europe > Monaco (0.04)
- North America
- Mexico > Mexico City
- Mexico City (0.04)
- United States
- California > San Francisco County
- San Francisco (0.14)
- Florida > Miami-Dade County
- Miami (0.04)
- New Mexico > Bernalillo County
- Albuquerque (0.04)
- Genre:
- Research Report
- Experimental Study (0.93)
- New Finding (1.00)
- Industry:
- Education (1.00)