




A Proofs — Lemma 1. For the mixed imagined opponent policy (IOP) π

Neural Information Processing Systems

According to Bayes' theorem, the posterior probability α over the imagined opponent policies (IOPs) is updated after each observed opponent action. The changing trends of α are diverse when playing against different opponents, which enables the mixed IOP to accurately model the opponent policy.

Figures 7 and 8: Performance against different types of opponents, i.e., fixed policy, naïve learner, and reasoning learner. Note that M = 1 is MBOM w/o IOPs.

Figure 9: Performance against different types of opponents, i.e., fixed policy, naïve learner, and reasoning learner in Predator-Prey, where the x-axis is the joint opponent index. Figure 9 shows the performance against different types of opponents compared with the baselines; for each type, there are ten test joint opponent policies.
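The Bayes update of α mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `update_alpha` and the toy numbers are assumptions; it only shows how a posterior over M imagined opponent policies shifts toward the IOP that best explains an observed opponent action.

```python
import numpy as np

def update_alpha(alpha, iop_probs):
    """Posterior over M imagined opponent policies (IOPs) after one observation.

    alpha     -- prior mixing weights, shape (M,), sums to 1
    iop_probs -- likelihood of the observed opponent action under each IOP, shape (M,)
    """
    posterior = alpha * iop_probs       # Bayes' rule numerator: prior x likelihood
    return posterior / posterior.sum()  # normalize to a distribution

# Illustrative prior over 3 IOPs and likelihoods of one observed action.
alpha = np.array([0.5, 0.3, 0.2])
likelihood = np.array([0.1, 0.6, 0.3])
alpha = update_alpha(alpha, likelihood)
# Weight mass shifts toward IOP 1, whose policy best matches the observed behavior.
```

Repeating this update over a trajectory is what produces the diverse trends of α against different opponent types.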



Model-Based Opponent Modeling

Yu, Xiaopeng, Jiang, Jiechuan, Jiang, Haobin, Lu, Zongqing

arXiv.org Artificial Intelligence

When one agent interacts with a multi-agent environment, it is challenging to deal with various opponents unseen before. Modeling the behaviors, goals, or beliefs of opponents could help the agent adjust its policy to adapt to different opponents. In addition, it is also important to consider opponents that are learning simultaneously or capable of reasoning. However, existing work usually tackles only one of the aforementioned types of opponents. In this paper, we propose model-based opponent modeling (MBOM), which employs the environment model to adapt to all kinds of opponents. MBOM simulates the recursive reasoning process in the environment model and imagines a set of improving opponent policies. To effectively and accurately represent the opponent policy, MBOM further mixes the imagined opponent policies according to their similarity with the real behaviors of opponents. Empirically, we show that MBOM achieves more effective adaptation than existing methods in competitive and cooperative environments, respectively, with different types of opponents, i.e., fixed policy, naïve learner, and reasoning learner.
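The mixing step described in the abstract can be sketched as a weighted combination of the imagined opponent policies' action distributions. This is a hedged sketch, not the authors' code: `mixed_iop` and the example distributions are illustrative, and the weights would in practice come from the Bayesian similarity update over observed opponent behavior.

```python
import numpy as np

def mixed_iop(alpha, policy_dists):
    """Mixed imagined opponent policy.

    alpha        -- mixing weights over M IOPs, shape (M,), sums to 1
    policy_dists -- action distributions of the M IOPs, shape (M, num_actions)

    Returns the mixture action distribution, shape (num_actions,).
    """
    return alpha @ policy_dists  # convex combination of the M distributions

# Two imagined opponent policies over two actions, weighted 0.25 / 0.75.
alpha = np.array([0.25, 0.75])
dists = np.array([[0.6, 0.4],
                  [0.2, 0.8]])
mix = mixed_iop(alpha, dists)  # -> [0.3, 0.7]
```

The agent can then best-respond to this single mixture distribution rather than committing to any one level of recursive reasoning.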