Exploring Communication Strategies for Collaborative LLM Agents in Mathematical Problem-Solving
Zhang, Liang, Zhai, Xiaoming, Lin, Jionghao, Kleiman, Jennifer, Zapata-Rivera, Diego, Forsyth, Carol, Jiang, Yang, Hu, Xiangen, Graesser, Arthur C.
arXiv.org Artificial Intelligence
Large Language Model (LLM) agents are increasingly used in AI-aided education to support tutoring and learning. Effective communication strategies among LLM agents improve collaborative problem-solving efficiency and enable cost-effective adoption in education. However, little research has systematically evaluated how different communication strategies affect agents' problem-solving performance. Our study examines four communication modes — teacher-student interaction, peer-to-peer collaboration, reciprocal peer teaching, and critical debate — in a dual-agent, chat-based mathematical problem-solving environment using the OpenAI GPT-4o model. Evaluated on the MATH dataset, our results show that dual-agent setups outperform single agents, with peer-to-peer collaboration achieving the highest accuracy. Dialogue acts such as statements, acknowledgments, and hints play a key role in collaborative problem-solving. While multi-agent frameworks enhance computational tasks, effective communication strategies are essential for tackling complex problems in AI education.
Jul-25-2025