DeepMind says the latest iteration of AlphaStar, AlphaStar Final, can play a full StarCraft 2 match under "professionally approved" conditions: importantly, with limits on the frequency of its actions and while viewing the world through the in-game camera. It plays on the official StarCraft 2 Battle.net server. "StarCraft has been a grand challenge for AI researchers for over 15 years, so it's hugely exciting to see this work recognized in Nature," said DeepMind cofounder and CEO Demis Hassabis. "These impressive results mark an important step forward in our mission to create intelligent systems that will accelerate scientific discovery." DeepMind's forays into competitive StarCraft play can be traced back to 2017, when the company worked with Blizzard to release an open source tool set containing anonymized match replays.
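One of the "professionally approved" conditions mentioned above is a cap on how often the agent may act. A minimal sketch of such a constraint, measured in game frames rather than wall-clock time, might look like the following; the class and parameter names here are hypothetical illustrations, not DeepMind's actual implementation:

```python
import collections

class ActionRateLimiter:
    """Hypothetical sliding-window cap on agent actions.

    Allows at most `max_actions` within any span of `window_frames`
    consecutive game frames; excess actions must become no-ops.
    """

    def __init__(self, max_actions: int, window_frames: int):
        self.max_actions = max_actions
        self.window_frames = window_frames
        self.history = collections.deque()  # frames at which actions were taken

    def try_act(self, frame: int) -> bool:
        # Discard recorded actions that have fallen outside the window.
        while self.history and frame - self.history[0] >= self.window_frames:
            self.history.popleft()
        if len(self.history) < self.max_actions:
            self.history.append(frame)
            return True
        return False  # over the cap: the agent must issue a no-op this frame
```

With a cap of 2 actions per 10-frame window, a third action in quick succession is rejected until the window slides forward.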
DeepMind has ambitions to solve some of the world's most complex problems using artificial intelligence. But first, it needs to get really good at StarCraft. After months of training, the Alphabet-owned AI firm's AlphaStar program is now capable of playing a full game of StarCraft II against a professional human player – and winning. It might sound frivolous, but mastering a game as complex as StarCraft is a major technological leap for DeepMind's AI brains. The company showed off AlphaStar in a livestream where the five agents created by the program were initially pitted against professional player Dario "TLO" Wünsch in a pre-recorded five-game series.
DeepMind's AlphaStar AI bot has reached Grandmaster level at StarCraft II, a popular battle strategy computer game, after ranking within the top 0.15 per cent of players in an online league. StarCraft II is a complex game with a massive following and its own annual professional tournament - the StarCraft II World Championship Series - in which the best international teams compete over a prize pot of over $2m. AlphaStar, however, isn't quite good enough to compete in that competition. Instead, it set its sights on a much smaller contest on Battle.net, the game's official online league hosted by China-friendly gaming biz Blizzard Entertainment. Researchers at Google stablemate DeepMind entered their bot AlphaStar into a series of blind games, where its opponents had no idea they were playing against a computer.
Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions [1, 2, 3], the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems [4]. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general-purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks [5, 6].
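The "league of continually adapting strategies" described in the abstract can be made concrete with a small sketch of league-style opponent sampling, where an agent preferentially plays against opponents it currently struggles to beat (a form of prioritized fictitious self-play). Everything here is a simplified illustration under assumed names; the real system trains deep networks from the resulting games, which this stub omits:

```python
import random

class Player:
    """Hypothetical league member tracking head-to-head results."""

    def __init__(self, name: str):
        self.name = name
        self.wins = {}   # opponent name -> wins against them
        self.games = {}  # opponent name -> games played against them

    def win_rate(self, opp: "Player") -> float:
        g = self.games.get(opp.name, 0)
        return self.wins.get(opp.name, 0) / g if g else 0.5  # unknown -> 50/50

def sample_opponent(agent: Player, league: list, rng: random.Random) -> Player:
    # Weight opponents by how badly the agent does against them, squared,
    # so hard counters are sampled most often.
    weights = [(1.0 - agent.win_rate(p)) ** 2 for p in league]
    return rng.choices(league, weights=weights, k=1)[0]

def train_step(agent: Player, league: list, rng: random.Random, play_match):
    """One league iteration: pick an opponent, play, record the result.

    `play_match(agent, opp)` stands in for running an actual game;
    a real system would also update the agent's network from the game.
    """
    opp = sample_opponent(agent, league, rng)
    won = play_match(agent, opp)
    agent.games[opp.name] = agent.games.get(opp.name, 0) + 1
    if won:
        agent.wins[opp.name] = agent.wins.get(opp.name, 0) + 1
    return opp, won
```

Past checkpoints of the agent, plus dedicated "exploiter" agents, would populate the league, so the main agent keeps facing fresh counter-strategies rather than overfitting to one opponent.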
After playing benchmark matches back in December, DeepMind's StarCraft-playing AI AlphaStar has beaten professional players in a series of games. Blizzard's StarCraft is a complex esports game with no single winning strategy. It has its own AI in single-player mode, but that AI relies on hand-crafted rules, has somewhat more information about the state of the map and its opponents than human players do, and can execute commands simultaneously, much faster than humans. Given the game's complexity, beating humans at it is considered another huge milestone in AI research. All previous StarCraft AIs, by contrast, relied mostly on a series of manually written rules and restrictions.