Solving Situation Puzzles with Large Language Model and External Reformulation
Kun Li, Xinwei Chen, Tianyou Song, Chengrui Zhou, Zhuoran Liu, Zhenyan Zhang, Jiangjian Guo, Qing Shan
arXiv.org Artificial Intelligence
In recent years, large language models (LLMs) have shown an impressive ability to perform arithmetic and symbolic reasoning tasks. However, we found that LLMs (e.g., ChatGPT) perform poorly on reasoning that requires multiple rounds of dialogue, especially when solving situation puzzles. Specifically, LLMs tend to ask overly detailed questions focused on a single aspect, or to repeat the same or similar questions after several rounds of Q&A. To help LLMs escape this dilemma, we propose a novel external reformulation methodology, in which the situation puzzle is reformulated after several rounds of Q&A or whenever the LLM makes an incorrect guess. Experiments show the superior performance (e.g., win rate, number of question/guess attempts) of our method over directly using LLMs to solve situation puzzles, highlighting the potential of strategic problem reformulation to enhance the reasoning capabilities of LLMs in complex interactive scenarios.
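The abstract's loop of interleaving blocks of Q&A with external reformulation of the puzzle statement can be sketched as follows. This is a minimal illustration, not the authors' implementation: `solver_ask`, `host_answer`, and `reformulate` are hypothetical stand-ins for the LLM calls (question-asking solver, yes/no puzzle host, and external reformulation model), and the block/round counts are arbitrary assumptions.

```python
def solve_with_reformulation(puzzle, solver_ask, host_answer, reformulate,
                             rounds_per_block=5, max_blocks=4):
    """Alternate blocks of Q&A with external reformulation of the puzzle.

    solver_ask(statement, history) -> a question or guess (hypothetical solver LLM)
    host_answer(question)          -> "yes" / "no" / "correct" (puzzle host)
    reformulate(statement, history)-> rewritten puzzle statement (external LLM)
    """
    statement = puzzle
    history = []
    for _ in range(max_blocks):
        for _ in range(rounds_per_block):
            question = solver_ask(statement, history)
            reply = host_answer(question)
            history.append((question, reply))
            if reply == "correct":
                return True, history
        # After a block of Q&A (or a wrong guess), reformulate the puzzle
        # so the solver does not keep circling the same narrow aspect.
        statement = reformulate(statement, history)
    return False, history


if __name__ == "__main__":
    # Toy stubs: the solver only finds the answer after the statement
    # has been reformulated once.
    def solver_ask(statement, history):
        return "final guess" if "hint" in statement else "probe"

    def host_answer(question):
        return "correct" if question == "final guess" else "no"

    def reformulate(statement, history):
        return statement + " hint"

    solved, history = solve_with_reformulation(
        "A man walks into a bar...", solver_ask, host_answer, reformulate)
    print(solved, len(history))  # the second block succeeds on its first round
```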
Mar-24-2025