Communicating Plans, Not Percepts: Scalable Multi-Agent Coordination with Embodied World Models
Hill, Brennen A., Wei, Mant Koh En, Jishnuanandh, Thangavel
arXiv.org Artificial Intelligence
Robust coordination is critical for effective decision-making in multi-agent systems, especially under partial observability. A central question in Multi-Agent Reinforcement Learning (MARL) is whether to engineer communication protocols or learn them end-to-end. We investigate this dichotomy using embodied world models. We propose and compare two communication strategies for a cooperative task-allocation problem. The first, Learned Direct Communication (LDC), learns a protocol end-to-end. The second, Intention Communication, relies on an engineered inductive bias: a compact, learned world model, the Imagined Trajectory Generation Module (ITGM), which uses the agent's own policy to simulate future states. A Message Generation Network (MGN) then compresses this imagined plan into a message. We evaluate these approaches on goal-directed interaction in a grid world, a canonical abstraction for embodied AI problems, while scaling environmental complexity. Our experiments reveal that while emergent communication is viable in simple settings, the engineered, world model-based approach shows superior performance, sample efficiency, and scalability as complexity increases. These findings advocate for integrating structured, predictive models into MARL agents to enable active, goal-driven coordination.
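The Intention Communication pipeline described above can be sketched in a few lines: a world model rolls the agent's own policy forward to imagine a trajectory (the ITGM), and a message network compresses that plan into a fixed-size vector (the MGN). The linear maps, dimensions, and tanh nonlinearities below are placeholder assumptions for illustration; the paper's actual modules are learned neural networks whose architectures are not specified in this abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACT_DIM, MSG_DIM, HORIZON = 4, 2, 3, 5  # hypothetical sizes

# Linear stand-ins for the learned components (illustration only).
W_dyn = rng.normal(scale=0.1, size=(STATE_DIM + ACT_DIM, STATE_DIM))  # world model
W_pol = rng.normal(scale=0.1, size=(STATE_DIM, ACT_DIM))              # agent's policy
W_msg = rng.normal(scale=0.1, size=(HORIZON * STATE_DIM, MSG_DIM))    # message network

def imagine_trajectory(state):
    """ITGM sketch: roll the agent's own policy forward inside the world model."""
    traj, s = [], state
    for _ in range(HORIZON):
        a = np.tanh(s @ W_pol)                       # action from current policy
        s = np.tanh(np.concatenate([s, a]) @ W_dyn)  # predicted next state
        traj.append(s)
    return np.concatenate(traj)  # flattened imagined plan, shape (HORIZON * STATE_DIM,)

def generate_message(state):
    """MGN sketch: compress the imagined plan into a fixed-size message."""
    return np.tanh(imagine_trajectory(state) @ W_msg)

msg = generate_message(rng.normal(size=STATE_DIM))
print(msg.shape)  # (3,)
```

The key design point the abstract emphasizes is that the message encodes a *plan* (imagined future states under the agent's current policy) rather than raw percepts, so teammates receive compressed, decision-relevant information.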
Nov-25-2025