Multi-Agent Tool-Integrated Policy Optimization
Zhanfeng Mo, Xingxuan Li, Yuntao Chen, Lidong Bing
arXiv.org Artificial Intelligence
Large language models (LLMs) increasingly rely on multi-turn tool-integrated planning for knowledge-intensive and complex reasoning tasks. Existing implementations typically rely on a single agent, which suffers from a limited context length and noisy tool responses. A natural solution is to adopt a multi-agent framework with planner and worker agents to manage context. However, no existing method supports effective reinforcement-learning post-training of tool-integrated multi-agent frameworks. To address this gap, we propose Multi-Agent Tool-Integrated Policy Optimization (MATPO), which trains distinct roles (planner and worker) within a single LLM instance using role-specific prompts via reinforcement learning. MATPO is derived from a principled credit-assignment mechanism across planner and worker rollouts. This design eliminates the need to deploy multiple LLMs, which would be memory-intensive, while preserving the benefits of specialization. Experiments on GAIA-text, WebWalkerQA, and FRAMES show that MATPO consistently outperforms single-agent baselines, with an average relative improvement of 18.38%, and exhibits greater robustness to noisy tool outputs. Our findings highlight the effectiveness of unifying multiple agent roles within a single LLM and provide practical insights for stable and efficient multi-agent RL training.
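To make the planner–worker setup concrete, below is a minimal Python sketch of how a shared-policy rollout with role-specific prompts and episode-level credit assignment might look. All names (Trajectory, run_episode, assign_credit, the two prompts) and the broadcast-reward rule are illustrative assumptions based only on the abstract, not the paper's actual implementation or API.

```python
# Hypothetical sketch of a MATPO-style rollout. One shared LLM plays both
# roles via role-specific system prompts, and a single outcome reward is
# broadcast to every rollout (planner and worker) so one policy update
# covers both roles. Names and structure are assumptions, not the paper's code.

from dataclasses import dataclass, field
from typing import Callable, List

PLANNER_PROMPT = "You are a planner. Decompose the task and delegate subtasks."
WORKER_PROMPT = "You are a worker. Solve the given subtask using tools."

@dataclass
class Trajectory:
    role: str                                   # "planner" or "worker"
    messages: List[str] = field(default_factory=list)
    reward: float = 0.0                         # filled in after the episode ends

def run_episode(llm: Callable[[str, List[str]], str],
                task: str,
                max_subtasks: int = 3) -> List[Trajectory]:
    """Roll out one planner trajectory plus one worker trajectory per subtask.
    Both roles call the SAME llm, differing only in their system prompt."""
    planner = Trajectory(role="planner", messages=[PLANNER_PROMPT, task])
    trajectories = [planner]
    for _ in range(max_subtasks):
        subtask = llm(PLANNER_PROMPT, planner.messages)   # planner step
        planner.messages.append(subtask)
        worker = Trajectory(role="worker", messages=[WORKER_PROMPT, subtask])
        result = llm(WORKER_PROMPT, worker.messages)      # worker step (tool calls elided)
        worker.messages.append(result)
        planner.messages.append(result)                   # worker's summary returns to planner
        trajectories.append(worker)
    return trajectories

def assign_credit(trajectories: List[Trajectory], outcome_reward: float) -> None:
    """Naive stand-in for the paper's credit assignment: broadcast the
    episode-level outcome reward to planner and worker rollouts alike."""
    for traj in trajectories:
        traj.reward = outcome_reward

if __name__ == "__main__":
    # Dummy policy standing in for the shared LLM, just to make the sketch runnable.
    dummy_llm = lambda system, msgs: f"[{system[:10]}...] step {len(msgs)}"
    trajs = run_episode(dummy_llm, "Answer a multi-hop question.")
    assign_credit(trajs, outcome_reward=1.0)
    for t in trajs:
        print(t.role, t.reward)
```

Because planner and worker trajectories share one set of weights, all rewarded rollouts can feed a single policy-gradient update, which is what lets the design avoid deploying separate LLMs per role.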
Oct-7-2025