DeepMind says this latest iteration of AlphaStar -- AlphaStar Final -- can play a full StarCraft 2 match under "professionally approved" conditions, importantly with limits on the frequency of its actions and by viewing the world through a game camera. It plays on the game's official Battle.net platform. "StarCraft has been a grand challenge for AI researchers for over 15 years, so it's hugely exciting to see this work recognized in Nature," said DeepMind cofounder and CEO Demis Hassabis. "These impressive results mark an important step forward in our mission to create intelligent systems that will accelerate scientific discovery." DeepMind's forays into competitive StarCraft play can be traced back to 2017, when the company worked with Blizzard to release an open source tool set containing anonymized match replays.
A major artificial intelligence milestone has been passed after an AI algorithm was able to defeat some of the world's best players at the real-time strategy game StarCraft II. Researchers at leading AI firm DeepMind developed a programme called AlphaStar capable of reaching the top esports league for the popular video game, ranking among the top 0.2 per cent of all human players. A paper detailing the achievement, published in the scientific journal Nature, reveals how a technique called reinforcement learning allowed the algorithm to essentially teach itself effective strategies and counter-strategies. "The history of progress in artificial intelligence has been marked by milestone achievements in games. Ever since computers cracked Go, chess and poker, StarCraft has emerged by consensus as the next grand challenge," said David Silver, a principal research scientist at DeepMind.
DeepMind today announced a new milestone for its artificial intelligence agents trained to play the Blizzard Entertainment game StarCraft II. The Google-owned AI lab's more sophisticated software, still called AlphaStar, is now grandmaster level in the real-time strategy game, capable of besting 99.8 percent of all human players in competition. The findings are to be published in a research paper in the scientific journal Nature. Not only that, but DeepMind says it also levelled the playing field when testing the new and improved AlphaStar against human opponents who opted into online competitions this past summer. For one, it trained AlphaStar to use all three of the game's playable races, adding to the complexity of the game at the upper echelons of pro play.
Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged by consensus as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions [1–3], the best results have been made possible by hand-crafting major elements of the system, simplifying important aspects of the game, or using superhuman capabilities [4]. Even with these modifications, no previous system has come close to rivalling the skill of top players in the full game. We chose to address the challenge of StarCraft using general-purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks [5,6]. We evaluated our agent, AlphaStar, in the full game of StarCraft II through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players.
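The league idea described in the abstract can be illustrated with a toy sketch: a rock-paper-scissors payoff stands in for StarCraft's strategy/counter-strategy cycles, and each generation trains a fresh policy against frozen snapshots of the league before joining it. All names, the payoff table, and the crude weight-update rule below are illustrative assumptions, not DeepMind's actual training code, which uses deep neural networks and prioritized opponent sampling.

```python
import random

# Toy stand-in for StarCraft's strategy/counter-strategy cycles:
# rock-paper-scissors payoffs from the row player's perspective.
PAYOFF = {
    ("R", "R"): 0, ("R", "P"): -1, ("R", "S"): 1,
    ("P", "R"): 1, ("P", "P"): 0, ("P", "S"): -1,
    ("S", "R"): -1, ("S", "P"): 1, ("S", "S"): 0,
}
MOVES = ["R", "P", "S"]

def train_against_league(league, steps=2000, lr=0.05, seed=0):
    """Adapt a new policy by playing frozen opponents sampled from the league."""
    rng = random.Random(seed)
    w = [1.0, 1.0, 1.0]                       # unnormalised move weights
    for _ in range(steps):
        opponent = rng.choice(league)         # sample a frozen league snapshot
        i = rng.choices(range(3), weights=w)[0]
        b = rng.choices(MOVES, weights=opponent)[0]
        reward = PAYOFF[(MOVES[i], b)]
        w[i] = max(1e-3, w[i] + lr * reward)  # reinforce moves that won
    total = sum(w)
    return [x / total for x in w]             # normalised policy

# League loop: seed with one fixed policy, then repeatedly add adapted snapshots.
league = [[1.0, 0.0, 0.0]]                    # seed agent always plays rock
for generation in range(3):
    league.append(train_against_league(league, seed=generation))
```

Because later generations train against the whole growing league rather than only the newest agent, the population keeps pressure on old strategies and avoids chasing a single cycling counter, which is the point of the league construction in the paper.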
An artificial intelligence (AI) system has reached the highest rank of StarCraft II, the fiendishly complex and wildly popular computer game, in a landmark achievement for the field. DeepMind's AlphaStar outperformed 99.8% of registered human players to attain grandmaster level at the game, which sees opponents build civilisations and battle their inventive, warmongering alien neighbours. The AI system mastered the game after 44 days of training, which involved learning from recordings of the best human players and then going up against itself and versions of the programme that intentionally tested its weaknesses. "AlphaStar has become the first AI system to reach the top tier of human performance in any professionally played e-sport on the full unrestricted game under professionally approved conditions," said David Silver, a researcher at DeepMind. More than $31m in prize money has been awarded across thousands of StarCraft II e-sport tournaments since the game was released in 2010.