Multi-Agent Guided Policy Optimization
Yueheng Li, Guangming Xie, Zongqing Lu
arXiv.org Artificial Intelligence
Due to practical constraints such as partial observability and limited communication, Centralized Training with Decentralized Execution (CTDE) has become the dominant paradigm in cooperative Multi-Agent Reinforcement Learning (MARL). However, existing CTDE methods often underutilize centralized training or lack theoretical guarantees. We propose Multi-Agent Guided Policy Optimization (MAGPO), a novel framework that better leverages centralized training by integrating centralized guidance with decentralized execution. MAGPO uses an auto-regressive joint policy for scalable, coordinated exploration and explicitly aligns it with decentralized policies to ensure deployability under partial observability. We provide theoretical guarantees of monotonic policy improvement and empirically evaluate MAGPO on 43 tasks across 6 diverse environments. Results show that MAGPO consistently outperforms strong CTDE baselines and matches or surpasses fully centralized approaches, offering a principled and practical solution for decentralized multi-agent learning. Our code and experimental data can be found at https://github.com/liyheng/MAGPO.
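The abstract's two core ingredients, an auto-regressive joint guide policy (agent i conditions on the actions of agents before it) and an alignment term that pulls the decentralized policies toward that guide, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear scorers, the action encoding, and the KL-based alignment loss are all placeholder assumptions standing in for learned networks and MAGPO's actual objective.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_ACTIONS, OBS_DIM = 3, 4, 5

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Hypothetical linear scorers standing in for learned networks.
# Guide (centralized): agent i sees the global state plus earlier agents' actions.
W_guide = rng.normal(size=(N_AGENTS, N_ACTIONS, OBS_DIM + N_AGENTS))
# Decentralized policy: agent i sees only its local observation.
W_dec = rng.normal(size=(N_AGENTS, N_ACTIONS, OBS_DIM))

def guide_step(state):
    """Sample a joint action auto-regressively: a_i ~ pi_i(. | s, a_{<i})."""
    prev = np.zeros(N_AGENTS)          # running encoding of a_{<i}
    actions, dists = [], []
    for i in range(N_AGENTS):
        logits = W_guide[i] @ np.concatenate([state, prev])
        p = softmax(logits)
        a = int(rng.choice(N_ACTIONS, p=p))
        actions.append(a)
        dists.append(p)
        prev[i] = a / (N_ACTIONS - 1)  # expose chosen action to later agents
    return actions, dists

def alignment_loss(local_obs, guide_dists):
    """Sum over agents of KL(guide_i || decentralized_i): the alignment term
    that keeps the deployable per-agent policies close to the central guide."""
    total = 0.0
    for i, p in enumerate(guide_dists):
        q = softmax(W_dec[i] @ local_obs[i])
        total += float(np.sum(p * (np.log(p) - np.log(q))))
    return total

state = rng.normal(size=OBS_DIM)
local_obs = rng.normal(size=(N_AGENTS, OBS_DIM))
actions, dists = guide_step(state)
loss = alignment_loss(local_obs, dists)
```

At deployment only `W_dec` (the decentralized policies, each acting on local observations) would be used, which is what makes the scheme executable under partial observability.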
Jul-25-2025