Collaboration Promotes Group Resilience in Multi-Agent AI
Sarah Keren, Matthias Gerstgrasser, Ofir Abu, Jeffrey Rosenschein
arXiv.org Artificial Intelligence
Reinforcement Learning (RL) agents typically operate in dynamic environments and must develop the ability to adapt quickly to unexpected perturbations. Promoting this ability is hard even in single-agent settings (Padakandla 2020). For a group it is more challenging still: in addition to the dynamic nature of the environment, agents must cope with the high variance caused by changes in the behavior of other agents. Unsurprisingly, many recent Multi-Agent RL (MARL) works have shown the beneficial effect that collaboration between agents has on their performance (Xu, Rao, and Bu 2012; Foerster et al. 2016; Lowe et al. 2017; Qian et al. 2019; Jaques et al. 2019; Christianos, Schäfer, and Albrecht 2020). Our objective is to highlight the relationship between a group's ability to collaborate effectively and its resilience, which we measure as the group's ability to adapt to perturbations in the environment. Thus, agents that collaborate not only increase their expected utility in a given environment but are also able to recover a larger fraction of their previous performance after a perturbation occurs. Unlike investigations of transfer learning (Zhu, Lin, and Zhou 2020; Liang and Li 2020) or curriculum learning (Portelas et al. 2020), we do not have a stationary target domain in which
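The resilience notion above, recovering a fraction of pre-perturbation performance, can be made concrete as a recovery ratio over episode returns. The sketch below is illustrative only; the function name `resilience_score`, the use of mean returns, and the clipping to [0, 1] are assumptions for exposition, not the paper's exact definition.

```python
import numpy as np

def resilience_score(pre_perturbation_returns, post_adaptation_returns):
    """Fraction of pre-perturbation performance recovered after re-adaptation.

    Both arguments are sequences of episode returns for the group. The score is
    the ratio of mean post-adaptation return to mean pre-perturbation return,
    clipped to [0, 1] so that 1.0 means full recovery.
    """
    baseline = float(np.mean(pre_perturbation_returns))
    recovered = float(np.mean(post_adaptation_returns))
    if baseline <= 0:
        raise ValueError("A positive pre-perturbation baseline is needed for a ratio metric.")
    return float(np.clip(recovered / baseline, 0.0, 1.0))

# Example: a group whose mean return drops from 9.0 to 8.1 after adapting to a
# perturbation scores 0.9, i.e., it recovers 90% of its previous performance.
print(resilience_score(pre_perturbation_returns=[9.2, 8.8, 9.0],
                       post_adaptation_returns=[8.0, 8.2, 8.1]))
```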
Dec-9-2022