MAESTRO: Open-Ended Environment Design for Multi-Agent Reinforcement Learning
Mikayel Samvelyan, Akbir Khan, Michael Dennis, Minqi Jiang, Jack Parker-Holder, Jakob Foerster, Roberta Raileanu, Tim Rocktäschel
arXiv.org Artificial Intelligence
Open-ended learning methods that automatically generate a curriculum of increasingly challenging tasks serve as a promising avenue toward generally capable reinforcement learning agents. Existing methods adapt curricula independently over either environment parameters (in single-agent settings) or co-player policies (in multi-agent settings). However, the strengths and weaknesses of co-players can manifest themselves differently depending on environmental features. It is thus crucial to consider the dependency between the environment and co-player when shaping a curriculum in multi-agent domains. In this work, we use this insight and extend Unsupervised Environment Design (UED) to multi-agent environments. We then introduce Multi-Agent Environment Design Strategist for Open-Ended Learning (MAESTRO), the first multi-agent UED approach for two-player zero-sum settings. MAESTRO efficiently produces adversarial, joint curricula over both environments and co-players and attains minimax-regret guarantees at Nash equilibrium. Our experiments show that MAESTRO outperforms a number of strong baselines on competitive two-player games, spanning discrete and continuous control settings.
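The abstract describes MAESTRO as producing a joint curriculum over (environment, co-player) pairs prioritized by regret. A minimal sketch of one way such a regret-prioritized joint buffer might look is below; all names, the buffer size, and the sampling scheme are illustrative assumptions, not the paper's actual implementation.

```python
import random

class JointCurriculum:
    """Hedged sketch: a replay buffer over (environment, co-player)
    pairs, prioritized by an estimated regret score, in the spirit of
    the joint curricula described above."""

    def __init__(self, replay_prob=0.5, capacity=100):
        self.buffer = []  # entries: (env_params, co_player_id, regret)
        self.replay_prob = replay_prob
        self.capacity = capacity

    def add(self, env_params, co_player_id, regret):
        # Insert a scored pair and keep only the highest-regret entries.
        self.buffer.append((env_params, co_player_id, regret))
        self.buffer.sort(key=lambda t: t[2], reverse=True)
        self.buffer = self.buffer[: self.capacity]

    def sample(self, fresh_pair):
        # With probability replay_prob, replay a stored pair weighted
        # by regret; otherwise explore the freshly generated pair.
        if self.buffer and random.random() < self.replay_prob:
            env, co_player, _ = random.choices(
                self.buffer, weights=[t[2] for t in self.buffer]
            )[0]
            return env, co_player
        return fresh_pair
```

In this sketch, high-regret pairs are replayed more often, so training concentrates on environment/co-player combinations where the agent currently underperforms relative to its best achievable return.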
Mar-6-2023