Asynchronous Cooperative Multi-Agent Reinforcement Learning with Limited Communication

Sydney Dolan, Siddharth Nayak, Jasmine Jerry Aloor, Hamsa Balakrishnan

arXiv.org Artificial Intelligence

Communication is crucial in cooperative multi-agent systems with partial observability, as it enables a better understanding of the environment and improves coordination. In extreme environments such as those underwater or in space, the frequency of communication between agents is often limited [1, 2]. For example, a satellite may not be able to reliably receive and react to messages from other satellites synchronously due to limited onboard power and communication delays. In these scenarios, agents aim to establish a communication protocol that allows them to operate independently while still receiving sufficient information to effectively coordinate with nearby agents. Multi-agent reinforcement learning (MARL) has emerged as a popular approach for addressing cooperative navigation challenges involving multiple agents.