Limits of Emergent Reasoning of Large Language Models in Agentic Frameworks for Deterministic Games
Chris Su, Harrison Li, Matheus Marques, George Flint, Kevin Zhu, Sunishchal Dev
arXiv.org Artificial Intelligence
Recent work reports that Large Reasoning Models (LRMs) undergo a collapse in performance when solving puzzles beyond certain complexity thresholds. In subsequent discourse, questions have arisen as to whether the nature of the task muddles an evaluation of true reasoning. One potential confound is the requirement that the model track the state space on its own. We provide a large language model (LLM) with an environment interface for Tower of Hanoi problems, allowing it to make a move with a tool call, provide written justification, observe the resulting state space, and reprompt itself for the next move. We observe that access to an environment interface neither delays nor eliminates performance collapse. Furthermore, LLM-parameterized policy analysis reveals increasing divergence from both optimal policies and uniformly random policies, suggesting that the model exhibits mode-like collapse at each level of complexity, and that performance depends on whether the mode reflects the correct solution to the problem. We suggest that a similar phenomenon might take place in LRMs.
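The environment interface described above can be sketched as a minimal Tower of Hanoi environment that validates each tool call and returns the resulting state for reprompting. All names here (`HanoiEnv`, `step`, `optimal_moves`) are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a Tower of Hanoi environment, assuming a tool-call
# loop in which the model proposes a move and observes the new state.
# Hypothetical names; not the paper's implementation.

class HanoiEnv:
    def __init__(self, n_disks: int):
        self.n = n_disks
        # pegs[0] holds all disks; larger numbers are larger disks,
        # so the largest disk (n) sits at the bottom.
        self.pegs = [list(range(n_disks, 0, -1)), [], []]
        self.moves = 0

    def step(self, src: int, dst: int) -> dict:
        """Apply one move; reject illegal moves instead of mutating state."""
        illegal = (not self.pegs[src]) or (
            self.pegs[dst] and self.pegs[dst][-1] < self.pegs[src][-1]
        )
        if illegal:
            return {"ok": False, "state": self.pegs}
        self.pegs[dst].append(self.pegs[src].pop())
        self.moves += 1
        return {"ok": True, "state": self.pegs, "solved": self.solved()}

    def solved(self) -> bool:
        return len(self.pegs[2]) == self.n

def optimal_moves(n: int) -> int:
    # The minimal solution for n disks takes 2^n - 1 moves.
    return 2 ** n - 1
```

Against such an environment, divergence of the LLM-parameterized policy from the optimal policy can be measured move-by-move, since the optimal next move from any legal state is computable in closed form.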
Oct-21-2025