Scalable Multi-Agent Diffusion Policies for Coverage Control
Frederic Vatnsdal, Romina Garcia Camargo, Saurav Agarwal, Alejandro Ribeiro
arXiv.org Artificial Intelligence
Abstract--We propose MADP, a novel diffusion-model-based approach for collaboration in decentralized robot swarms. MADP leverages diffusion models to generate samples from complex, high-dimensional action distributions that capture the interdependencies between agents' actions. Each robot conditions policy sampling on a fused representation of its own observations and the perceptual embeddings received from peers. To evaluate this approach, we task a team of holonomic robots piloted by MADP with coverage control--a canonical multi-agent navigation problem. The policy is trained via imitation learning from a clairvoyant expert on the coverage control problem, with the diffusion process parameterized by a spatial transformer architecture to enable decentralized inference. We evaluate the system under varying numbers, locations, and variances of importance density functions, capturing the robustness demands of real-world coverage tasks. Experiments demonstrate that our model inherits valuable properties from diffusion models, generalizing across agent densities and environments, and consistently outperforming state-of-the-art baselines.
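The core mechanism the abstract describes--reverse-diffusion sampling of an action, conditioned on a fusion of the robot's own observation with peer embeddings--can be sketched in a few lines. This is a minimal DDPM-style sampler, not the paper's implementation: the noise schedule, the toy denoiser, and all variable names (`ddpm_sample`, `toy_denoiser`, `cond`) are illustrative assumptions standing in for the spatial-transformer network described in the paper.

```python
import numpy as np

def ddpm_sample(denoise_fn, cond, action_dim, steps=50, seed=0):
    """Reverse-diffusion sampling of an action conditioned on `cond`.
    `denoise_fn(x, cond, t)` predicts the noise at step t; in MADP this
    role would be played by the spatial-transformer network (assumption)."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)        # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(action_dim)           # start from pure Gaussian noise
    for t in reversed(range(steps)):
        eps = denoise_fn(x, cond, t)              # predicted noise
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t]) # DDPM posterior mean
        if t > 0:                                 # inject noise except at the final step
            x += np.sqrt(betas[t]) * rng.standard_normal(action_dim)
    return x

# Toy conditioning vector: fuse the robot's own observation with a
# peer's perceptual embedding (both values are made up for illustration).
own_obs = np.array([0.2, -0.1])
peer_embed = np.array([0.5, 0.3])
cond = np.concatenate([own_obs, peer_embed])

# Placeholder denoiser: nudges samples toward a cond-dependent target.
def toy_denoiser(x, cond, t):
    return x - (cond[:2] + cond[2:])

action = ddpm_sample(toy_denoiser, cond, action_dim=2)
```

Because the sampler draws from a learned distribution rather than regressing a single mean action, it can represent the multimodal, inter-agent-dependent action distributions the abstract highlights.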
Sep-23-2025