New framework for cooperative bots aims to mimic high-performing human teams

AIHub

A Georgia Institute of Technology research group in the School of Interactive Computing has developed a robotics system for collaborative bots that work independently toward a shared goal. The system intelligently increases the information shared among the bots, enabling improved cooperation. The aim is to model high-functioning human teams. It also builds resiliency against faulty or unreliable teammate bots that may hinder the overall programmed goal. "Intuitively, the idea behind our new framework -- InfoPG -- is that a robot agent goes back-and-forth with its teammates on what it thinks it should do, and then the teammates update what they think is best to do," said Esmaeil Seraj, Ph.D. student in the CORE Robotics Lab and researcher on the project.


Iterated Reasoning with Mutual Information in Cooperative and Byzantine Decentralized Teaming

Konan, Sachin, Seraj, Esmaeil, Gombolay, Matthew

arXiv.org Artificial Intelligence

Information sharing is key in building team cognition and enables coordination and cooperation. High-performing human teams also benefit from acting strategically with hierarchical levels of iterated communication and rationalizability, meaning a human agent can reason about the actions of their teammates in their decision-making. Yet, the majority of prior work in Multi-Agent Reinforcement Learning (MARL) does not support iterated rationalizability and only encourages inter-agent communication, resulting in a suboptimal equilibrium cooperation strategy. In this work, we show that reformulating an agent's policy to be conditional on the policies of its neighboring teammates inherently maximizes a lower bound on Mutual Information (MI) when optimizing under Policy Gradient (PG). Building on the idea of decision-making under bounded rationality and cognitive hierarchy theory, we show that our modified PG approach not only maximizes local agent rewards but also implicitly reasons about MI between agents without the need for any explicit ad-hoc regularization terms. Our approach, InfoPG, outperforms baselines in learning emergent collaborative behaviors and sets the state-of-the-art in decentralized cooperative MARL tasks. Our experiments validate the utility of InfoPG by achieving higher sample efficiency and significantly larger cumulative reward in several complex cooperative multi-agent domains.
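The core mechanism the abstract describes, each agent conditioning its action distribution on its neighbors' current policies through several levels of iterated reasoning, can be illustrated with a toy sketch. This is not the authors' implementation: the function names, the linear policies, and the additive way neighbor beliefs are folded into an agent's logits are all illustrative assumptions; only the k-level back-and-forth structure reflects the idea in the paper.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over action logits
    e = np.exp(x - x.max())
    return e / e.sum()

def k_level_policies(obs, weights, comm, k=2):
    """Toy sketch of iterated (k-level) policy reasoning.

    obs[i]     : agent i's local observation vector (assumed given)
    weights[i] : agent i's (hypothetical) linear policy parameters
    comm[i]    : indices of agent i's communication neighbors
    k          : depth of iterated reasoning (cognitive-hierarchy levels)
    """
    n = len(obs)
    # level-0: each agent acts on its own observation alone
    dists = [softmax(weights[i] @ obs[i]) for i in range(n)]
    for _ in range(k):
        new = []
        for i in range(n):
            # fold neighbors' level-(j-1) beliefs into agent i's logits;
            # simple averaging + addition is an illustrative stand-in for
            # the paper's conditional-policy formulation
            neigh = np.mean([dists[j] for j in comm[i]], axis=0)
            new.append(softmax(weights[i] @ obs[i] + neigh))
        dists = new  # level-j distributions become input to level-(j+1)
    return dists
```

In an actual InfoPG-style learner the `weights` would be neural-network parameters trained by policy gradient, and it is this conditioning of each policy on teammates' policies that the paper shows maximizes an MI lower bound without explicit regularization.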