Emergent Coordination Through Competition
Siqi Liu, Guy Lever, Josh Merel, Saran Tunyasuvunakool, Nicolas Heess, Thore Graepel
arXiv.org Artificial Intelligence
We study the emergence of cooperative behaviors in reinforcement learning agents by introducing a challenging competitive multi-agent soccer environment with continuous simulated physics. We demonstrate that decentralized, population-based training with co-play can lead to a progression in agents' behaviors: from random, to simple ball chasing, and finally to evidence of cooperation. Our study highlights several of the challenges encountered in large-scale multi-agent training in continuous control. In particular, we demonstrate that the automatic optimization of simple shaping rewards, not themselves conducive to cooperative behavior, can lead to long-horizon team behavior. We further apply an evaluation scheme, grounded in game-theoretic principles, that can assess agent performance in the absence of pre-defined evaluation tasks or human baselines.
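The game-theoretic evaluation the abstract alludes to can be illustrated with a small sketch (this is an illustration, not the paper's actual code): treat pairwise win margins between agents in a population as the payoff matrix of a symmetric zero-sum meta-game, then approximate a Nash-equilibrium mixture over agents with fictitious play. Agents receiving weight in the mixture are the ones no fixed opponent distribution exploits; the toy payoff matrix below is invented for the example.

```python
def fictitious_play(payoff, iters=20000):
    """Approximate a Nash mixture over row strategies of a zero-sum game.

    `payoff[i][j]` is the row player's payoff when row plays i and
    column plays j; the column player's payoff is its negation.
    """
    n = len(payoff)
    row_counts = [0] * n  # empirical play counts for the row player
    col_counts = [0] * n  # empirical play counts for the column player
    row_counts[0] = 1
    col_counts[0] = 1
    for _ in range(iters):
        # Row best-responds to the column player's empirical mixture.
        row_vals = [sum(payoff[i][j] * col_counts[j] for j in range(n))
                    for i in range(n)]
        row_counts[max(range(n), key=lambda i: row_vals[i])] += 1
        # Column best-responds (minimizes row payoff) to the row mixture.
        col_vals = [sum(payoff[i][j] * row_counts[i] for i in range(n))
                    for j in range(n)]
        col_counts[min(range(n), key=lambda j: col_vals[j])] += 1
    total = sum(row_counts)
    return [c / total for c in row_counts]

# Toy antisymmetric payoff: entry (i, j) is agent i's expected margin
# against agent j. The cycle A beats B beats C beats A means no single
# agent dominates, so the Nash mixture spreads support over all three.
payoff = [
    [0, 1, -1],
    [-1, 0, 1],
    [1, -1, 0],
]
mixture = fictitious_play(payoff)
```

For this non-transitive payoff the mixture approaches uniform weights, which is the point of such an evaluation: it ranks agents by unexploitability against the population rather than by mean score against any single hand-picked baseline.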
Feb-21-2019