Cooperative Exploration for Multi-Agent Deep Reinforcement Learning
Iou-Jen Liu, Unnat Jain, Raymond A. Yeh, Alexander G. Schwing
arXiv.org Artificial Intelligence
Exploration is critical for good results in deep reinforcement learning and has attracted much attention. However, existing multi-agent deep reinforcement learning algorithms still rely mostly on noise-based exploration techniques. Very recently, exploration methods that consider cooperation among multiple agents have been developed. However, existing methods suffer from a common challenge: agents struggle to identify states that are worth exploring, and rarely coordinate their exploration efforts toward those states. To address this shortcoming, in this paper, we propose cooperative multi-agent exploration (CMAE): agents share a common goal while exploring. The goal is selected from multiple projected state spaces via a normalized entropy-based technique. Then, agents are trained to reach this goal in a coordinated manner. We demonstrate that CMAE consistently outperforms baselines on various tasks, including a sparse-reward version of the multiple-particle environment (MPE) and the StarCraft multi-agent challenge (SMAC).
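The abstract only sketches the goal-selection step, so the following is a minimal illustrative sketch of what normalized entropy-based goal selection over projected state spaces could look like. It is not the paper's implementation: the function names (`normalized_entropy`, `select_shared_goal`), the discrete-state assumption, and the choice to pick the least-visited state in the lowest-entropy space are all assumptions for illustration.

```python
import numpy as np
from collections import Counter

def normalized_entropy(counts):
    """Shannon entropy of visit counts, normalized to [0, 1] by the log
    of the number of distinct bins (an assumed normalization choice)."""
    total = sum(counts.values())
    probs = np.array([c / total for c in counts.values()])
    entropy = -np.sum(probs * np.log(probs))
    n = len(counts)
    return entropy / np.log(n) if n > 1 else 0.0

def select_shared_goal(visited_states, projections):
    """Pick the projected state space with the lowest normalized entropy
    (i.e., the least uniformly explored one), then return its rarest
    projected state as the common goal shared by all agents.

    visited_states: list of discrete joint-state tuples seen so far.
    projections: list of functions mapping a full state to a projected
                 (restricted) state space.
    """
    best_counts, best_score = None, float("inf")
    for project in projections:
        counts = Counter(project(s) for s in visited_states)
        score = normalized_entropy(counts)
        if score < best_score:
            best_counts, best_score = counts, score
    # Within the selected space, the least-visited state becomes the goal.
    return min(best_counts, key=best_counts.get)
```

Under this reading, a low normalized entropy flags a projected space whose states have been visited unevenly, i.e., one that still contains underexplored regions worth coordinating on.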
Jul-23-2021