
Multi-Agent Common Knowledge Reinforcement Learning

Neural Information Processing Systems

Cooperative multi-agent reinforcement learning often requires decentralised policies, which severely limit the agents' ability to coordinate their behaviour. In this paper, we show that common knowledge between agents allows for complex decentralised coordination. Common knowledge arises naturally in a large number of decentralised cooperative multi-agent tasks, for example, when agents can reconstruct parts of each other's observations. Since agents can independently agree on their common knowledge, they can execute complex coordinated policies that condition on this knowledge in a fully decentralised fashion. We propose multi-agent common knowledge reinforcement learning (MACKRL), a novel stochastic actor-critic algorithm that learns a hierarchical policy tree. Higher levels in the hierarchy coordinate groups of agents by conditioning on their common knowledge, or delegate to lower levels with smaller subgroups but potentially richer common knowledge. The entire policy tree can be executed in a fully decentralised fashion. As the lowest policy tree level consists of independent policies for each agent, MACKRL reduces to independently learnt decentralised policies as a special case. We demonstrate that our method can exploit common knowledge for superior performance on complex decentralised coordination tasks, including a stochastic matrix game and challenging problems in StarCraft II unit micromanagement.
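The hierarchical delegation scheme the abstract describes can be illustrated with a minimal sketch for two agents: a pair-level controller conditions only on common knowledge and either selects a joint action or delegates to per-agent policies that may use richer private observations. The names and uniform-random policies below are illustrative assumptions, not the authors' implementation.

```python
import random

# Hypothetical two-agent sketch of MACKRL's policy tree. Because the
# pair controller conditions only on common knowledge, both agents can
# run it independently and reach the same decision.

JOINT_ACTIONS = [("left", "left"), ("right", "right")]
INDIVIDUAL_ACTIONS = ["left", "right"]

def pair_controller(common_knowledge, rng):
    """Return ('joint', action_pair) or ('delegate', None)."""
    # With sufficient common knowledge, coordinate the pair directly.
    if common_knowledge is not None:
        return "joint", rng.choice(JOINT_ACTIONS)
    # Otherwise delegate down the tree to independent policies.
    return "delegate", None

def individual_policy(private_obs, rng):
    # Decentralised fallback: each agent acts on its own observation.
    return rng.choice(INDIVIDUAL_ACTIONS)

def select_actions(common_knowledge, private_obs, rng):
    mode, joint = pair_controller(common_knowledge, rng)
    if mode == "joint":
        return joint  # both agents derive this pair independently
    return tuple(individual_policy(o, rng) for o in private_obs)

# Usage: with common knowledge the pair acts jointly; without it,
# MACKRL reduces to independent decentralised policies.
rng = random.Random(0)
coordinated = select_actions("both see landmark", ("o1", "o2"), rng)
independent = select_actions(None, ("o1", "o2"), rng)
```

In the real algorithm the controllers are learned networks and the tree extends to larger groups, but the key property shown here carries over: every decision either conditions on knowledge all group members share or is delegated to a smaller subgroup.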


Reviews: Multi-Agent Common Knowledge Reinforcement Learning

Neural Information Processing Systems

My two biggest complaints center on 1) the illustrative single-step matrix game of section 4.1 and figure 3 and 2) the practical applications of MACKRL. 1) Since the primary role of the single-step matrix game in section 4.1 is illustrative, it should be much clearer what is going on. How are all 3 policies parameterized? What information does each have access to? What is the training data? First, let's focus on the JAL policy. As presented up until this point in the paper, JAL means centralized training *and* execution.


Reviews: Multi-Agent Common Knowledge Reinforcement Learning

Neural Information Processing Systems

All reviewers agreed this paper is well written and presents some interesting novel ideas. Reviewers believe that integrating common knowledge directly into multi-agent RL training is a nice idea and that it suggests some interesting future directions of research. Initially, there were shared concerns and confusion about a number of issues, most prominently the matrix game example. After reading and discussing the authors' rebuttal, however, it seems the authors adequately addressed some of the primary concerns, and the general sense is that this paper is solid and of interest to be presented at NeurIPS.



Multi-Agent Common Knowledge Reinforcement Learning

Witt, Christian Schroeder de, Foerster, Jakob, Farquhar, Gregory, Torr, Philip, Boehmer, Wendelin, Whiteson, Shimon

Neural Information Processing Systems
