



Neural Information Processing Systems

To investigate further, we ran several instances of FP and SFP from random starting points (i.e., with the initial policy generated by normalizing uniformly drawn random numbers); results are



Pick Your Battles: Interaction Graphs as Population-Level Objectives for Strategic Diversity

Garnelo, Marta, Czarnecki, Wojciech Marian, Liu, Siqi, Tirumala, Dhruva, Oh, Junhyuk, Gidel, Gauthier, van Hasselt, Hado, Balduzzi, David

arXiv.org Artificial Intelligence

Strategic diversity is often essential in games: in multi-player games, for example, evaluating a player against a diverse set of strategies yields a more accurate estimate of its performance. Furthermore, in games with non-transitivities, diversity allows a player to cover several winning strategies. However, despite the significance of strategic diversity, training agents that exhibit diverse behaviour remains a challenge. In this paper we study how to construct diverse populations of agents by carefully structuring how individuals within a population interact. Our approach is based on interaction graphs, which control the flow of information between agents during training and can encourage agents to specialise on different strategies, leading to improved overall performance. We provide evidence for the importance of diversity in multi-agent training and analyse the effect of applying different interaction graphs on the training trajectories, diversity and performance of populations in a range of games. This is an extended version of the long abstract published at AAMAS.
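The population-level objective the abstract describes can be illustrated with a toy sketch. All names and numbers below are hypothetical, not the paper's setup: an interaction graph is modelled as a weighted adjacency matrix over agents, and each agent's objective is its payoff averaged over its graph-weighted opponents. A sparse graph (e.g. a cycle) makes each agent answer to only one opponent, which is the kind of structure that can push agents toward specialisation.

```python
import numpy as np

# Hypothetical setup: 4 agents with payoffs p[i, j] in [0, 1] of agent i
# when playing against agent j (randomly generated for illustration).
rng = np.random.default_rng(0)
payoffs = rng.random((4, 4))

# Interaction graph W: W[i, j] > 0 means agent i trains against agent j.
# A cycle graph gives each agent a single opponent to specialise against;
# the fully connected graph recovers the usual "play everyone" objective.
cycle = np.roll(np.eye(4), 1, axis=1)            # agent i plays agent (i+1) % 4
fully_connected = (np.ones((4, 4)) - np.eye(4)) / 3

def population_objective(W, payoffs):
    """Each agent's objective: its payoff averaged over graph-weighted opponents."""
    return (W * payoffs).sum(axis=1) / W.sum(axis=1)

print(population_objective(cycle, payoffs))
print(population_objective(fully_connected, payoffs))
```

Under the cycle graph each agent's objective reduces to its payoff against a single neighbour, whereas the fully connected graph averages over the whole population; the paper studies how such choices shape diversity during training.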


Navigating the landscape of multiplayer games

#artificialintelligence

Multiplayer games have long been used as testbeds in artificial intelligence research, aptly referred to as the Drosophila of artificial intelligence. Traditionally, researchers have focused on using well-known games to build strong agents. This progress, however, can be better informed by characterizing games and their topological landscape. Tackling this latter question can facilitate understanding of agents and help determine what game an agent should target next as part of its training. Here, we show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games, quantifying relationships between games of varying sizes and characteristics. We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another. Our results culminate in a demonstration leveraging this information to generate new and interesting games, including mixtures of empirical games synthesized from real world games.

Multiplayer games can be used as testbeds for the development of learning algorithms for artificial intelligence. Omidshafiei et al. show how to characterize and compare such games using a graph-based approach, generating new games that could potentially be interesting for training in a curriculum.


Learning to Play No-Press Diplomacy with Best Response Policy Iteration

Anthony, Thomas, Eccles, Tom, Tacchetti, Andrea, Kramár, János, Gemp, Ian, Hudson, Thomas C., Porcel, Nicolas, Lanctot, Marc, Pérolat, Julien, Everett, Richard, Werpachowski, Roman, Singh, Satinder, Graepel, Thore, Bachrach, Yoram

arXiv.org Artificial Intelligence

Recent advances in deep reinforcement learning (RL) have led to considerable progress in many 2-player zero-sum games, such as Go, Poker and StarCraft. The purely adversarial nature of such games allows for conceptually simple and principled application of RL methods. However, real-world settings are many-agent, and agent interactions are complex mixtures of common-interest and competitive aspects. We consider Diplomacy, a 7-player board game designed to accentuate dilemmas resulting from many-agent interactions. It also features a large combinatorial action space and simultaneous moves, which are challenging for RL algorithms. We propose a simple yet effective approximate best response operator, designed to handle large combinatorial action spaces and simultaneous moves. We also introduce a family of policy iteration methods that approximate fictitious play. With these methods, we successfully apply RL to Diplomacy: we show that our agents convincingly outperform the previous state-of-the-art, and game theoretic equilibrium analysis shows that the new process yields consistent improvements.
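The fictitious-play idea that these policy iteration methods approximate can be sketched on a toy matrix game. This is only an illustration of the underlying principle, not the paper's method (which uses learned approximate best responses to handle Diplomacy's combinatorial action space): in fictitious play, each player repeatedly best-responds to the opponent's empirical action frequencies, and in zero-sum games those empirical frequencies converge to an equilibrium.

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (zero-sum).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

def fictitious_play(A, iters=20000):
    """Both players best-respond to the opponent's empirical action frequencies."""
    n, m = A.shape
    row_counts = np.ones(n)  # uniform pseudo-counts as the initial policy
    col_counts = np.ones(m)
    for _ in range(iters):
        # Best response to the opponent's current empirical mixed strategy.
        row_br = np.argmax(A @ (col_counts / col_counts.sum()))
        col_br = np.argmin((row_counts / row_counts.sum()) @ A)  # minimises row payoff
        row_counts[row_br] += 1
        col_counts[col_br] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

row_avg, col_avg = fictitious_play(A)
# Empirical frequencies approach the uniform equilibrium (1/3, 1/3, 1/3).
print(row_avg, col_avg)
```

The same loop structure carries over conceptually to the paper's setting, with the exact argmax replaced by a trained approximate best-response policy.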


Navigating the Landscape of Multiplayer Games to Probe the Drosophila of AI

Omidshafiei, Shayegan, Tuyls, Karl, Czarnecki, Wojciech M., Santos, Francisco C., Rowland, Mark, Connor, Jerome, Hennes, Daniel, Muller, Paul, Perolat, Julien, De Vylder, Bart, Gruslys, Audrunas, Munos, Remi

arXiv.org Artificial Intelligence

Multiplayer games have a long history of being used as key testbeds for evaluation and training in artificial intelligence (AI), aptly referred to as the "Drosophila of AI". Traditionally, researchers have focused on using games to build strong AI agents that, e.g., achieve human-level performance. This progress, however, also requires a classification of how 'interesting' a game is for an artificial agent, which requires characterization of games and their topological landscape. Tackling this latter question not only facilitates an understanding of the characteristics of learnt AI agents in games, but can also help determine what game an AI should address next as part of its training. Here, we show how network measures applied to so-called response graphs of large-scale games enable the creation of a useful landscape of games, quantifying the relationships between games of widely varying sizes, characteristics, and complexities. We illustrate our findings in various domains, ranging from well-studied canonical games to significantly more complex empirical games capturing the performance of trained AI agents pitted against one another. Our results culminate in a demonstration of how one can leverage this information to automatically generate new and interesting games, including mixtures of empirical games synthesized from real world games.
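The response graphs on which these network measures operate can be sketched in a simplified single-population form (the paper's response graphs are defined over strategy profiles of large-scale empirical games; the win-rate table below is hypothetical): nodes are strategies, and a directed edge points from a strategy to any strictly better reply against it. Simple graph statistics, such as node degrees, then summarise the game's structure.

```python
import numpy as np

# Hypothetical win-rate table among four strategies of a symmetric game
# (entry M[i, j] is the payoff of strategy i when playing strategy j).
M = np.array([[ 0.0,  1.0, -1.0,  0.5],
              [-1.0,  0.0,  1.0, -0.5],
              [ 1.0, -1.0,  0.0,  0.2],
              [-0.5,  0.5, -0.2,  0.0]])

def response_graph(M):
    """Directed edge i -> j whenever j is a strictly better reply to i than i itself
    (simplified single-population view of a response graph)."""
    n = M.shape[0]
    edges = set()
    for i in range(n):
        for j in range(n):
            if i != j and M[j, i] > M[i, i]:
                edges.add((i, j))
    return edges

edges = response_graph(M)
# One simple network measure: out-degree of each node, i.e. how many
# strategies improve upon it (a rough "beatability" score).
out_deg = [sum(1 for (a, _) in edges if a == i) for i in range(M.shape[0])]
print(sorted(edges))
print(out_deg)
```

On this toy table the graph contains the cycle 0 -> 2 -> 1 -> 0, a non-transitivity of the kind the paper's landscape analysis is designed to surface.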