SMACv2: An Improved Benchmark for Cooperative Multi-Agent Reinforcement Learning

Neural Information Processing Systems 

The availability of challenging benchmarks has played a key role in the recent progress of machine learning. In cooperative multi-agent reinforcement learning, the StarCraft Multi-Agent Challenge (SMAC) has become a popular testbed for centralised training with decentralised execution. However, after years of sustained improvement on SMAC, algorithms now achieve near-perfect performance. In this work, we conduct new analysis demonstrating that SMAC lacks sufficient stochasticity and partial observability to require complex policies. In particular, we show that a policy conditioned only on the timestep can achieve non-trivial win rates for many SMAC scenarios.