DeepMind's AlphaStar Final beats 99.8% of human StarCraft 2 players


DeepMind says this latest iteration of AlphaStar -- AlphaStar Final -- can play a full StarCraft 2 match under "professionally approved" conditions, importantly with limits on the frequency of its actions and by viewing the world through a game camera. It plays on the official StarCraft 2 online league.

"StarCraft has been a grand challenge for AI researchers for over 15 years, so it's hugely exciting to see this work recognized in Nature," said DeepMind cofounder and CEO Demis Hassabis. "These impressive results mark an important step forward in our mission to create intelligent systems that will accelerate scientific discovery." DeepMind's forays into competitive StarCraft play can be traced back to 2017, when the company worked with Blizzard to release an open source tool set containing anonymized match replays.
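The "limits on the frequency of its actions" mentioned above are essentially an actions-per-minute cap. As a hypothetical illustration (this is not DeepMind's code, and the exact cap here is an assumption loosely based on the widely reported limit of roughly 22 agent actions per 5-second window), such a cap can be enforced with a sliding-window rate limiter:

```python
from collections import deque

class ActionRateLimiter:
    """Sliding-window cap on how many actions fit in a time window.

    Illustrative sketch only: stands in for the kind of action-rate
    restriction the article describes, not AlphaStar's actual mechanism.
    """

    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self.times = deque()  # timestamps of recently issued actions

    def try_act(self, now):
        """Return True (and record the action) if acting now stays under the cap."""
        # Drop actions that have aged out of the window.
        while self.times and now - self.times[0] >= self.window:
            self.times.popleft()
        if len(self.times) < self.max_actions:
            self.times.append(now)
            return True
        return False  # over the cap: the agent must wait
```

For example, `ActionRateLimiter(max_actions=22, window_seconds=5.0)` would reject any action that pushes the agent past 22 actions in a rolling 5-second span, forcing it to spread its play out at a human-plausible pace.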

Thought you were good at StarCraft? DeepMind's AI bot proves better than 99.8% of fleshy humans


DeepMind's AlphaStar AI bot has reached Grandmaster level at StarCraft II, a popular battle strategy computer game, after ranking within the top 0.15 per cent of players in an online league. StarCraft II is a complex game with a massive following and its own annual professional tournament - the StarCraft II World Championship Series - in which the best international players compete for a prize pot of over $2m. AlphaStar, however, isn't quite good enough to compete in that competition. Instead it set its sights on a much smaller contest: the game's official online league, hosted by China-friendly gaming biz Blizzard Entertainment. Researchers at Google stablemate DeepMind entered their bot AlphaStar into a series of blind games, where its opponents had no idea they were playing against a computer.

AI Dominates Human Professional Players in StarCraft II


An artificial intelligence has defeated two top-ranked human players in the computer game StarCraft II, using strategies rarely seen before. On Thursday, gamers were able to watch the AI agent, called AlphaStar, expertly command armies of "Protoss" units against the professional players. The result: the AI beat the humans in 10 of the 11 matches. "I was surprised by how strong the agent was," said Dario "TLO" Wünsch, one of the human players. "AlphaStar takes well-known strategies and turns them on their head."

This is how Google's DeepMind crushed puny humans at StarCraft


DeepMind has ambitions to solve some of the world's most complex problems using artificial intelligence. But first, it needs to get really good at StarCraft. After months of training, the Alphabet-owned AI firm's AlphaStar program is now capable of playing a full game of StarCraft II against a professional human player – and winning. It might sound frivolous, but mastering a game as complex as StarCraft is a major technological leap for DeepMind's AI brains. The company showed off AlphaStar in a livestream where the five agents created by the program were initially pitted against professional player Dario "TLO" Wünsch in a pre-recorded five-game series.

Grandmaster level in StarCraft II using multi-agent reinforcement learning


Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions [1, 2, 3], the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems [4]. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general-purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks [5, 6].
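The "league of continually adapting strategies and counter-strategies" can be sketched in miniature. The toy below is an assumption-laden stand-in, not AlphaStar's implementation: a rock-paper-scissors mixed strategy plays the role of a StarCraft strategy, a crude best-response update replaces the deep-RL gradient step, and past snapshots of the learner are frozen into the league as opponents. The one idea it does faithfully illustrate is prioritized fictitious self-play: the learner is matched more often against league members it currently loses to.

```python
import random
from collections import defaultdict

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play(p, q, rng):
    """Return 1 if mixed strategy p beats q in one round, 0 on a draw, -1 on a loss."""
    a = rng.choices(MOVES, weights=p)[0]
    b = rng.choices(MOVES, weights=q)[0]
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def pick_opponent(league, winrate, rng):
    """Prioritized fictitious self-play: sample hard opponents more often."""
    weights = [(1.0 - winrate[i]) ** 2 + 1e-3 for i in range(len(league))]
    return rng.choices(range(len(league)), weights=weights)[0]

def train(steps=2000, snapshot_every=500, lr=0.05, seed=0):
    rng = random.Random(seed)
    learner = [1 / 3, 1 / 3, 1 / 3]       # current learning strategy
    league = [list(learner)]              # frozen past snapshots (the league)
    winrate = defaultdict(lambda: 0.5)    # running win-rate vs each snapshot
    for t in range(1, steps + 1):
        j = pick_opponent(league, winrate, rng)
        r = play(learner, league[j], rng)
        winrate[j] = 0.99 * winrate[j] + 0.01 * (r > 0)
        # Crude counter-strategy update: shift probability mass toward the
        # move that beats the opponent's most likely move (a stand-in for
        # the reinforcement learning update in the real system).
        opp_mode = max(range(3), key=lambda k: league[j][k])
        target = MOVES.index(next(m for m in MOVES
                                  if BEATS[m] == MOVES[opp_mode]))
        learner = [w * (1 - lr) for w in learner]
        learner[target] += lr
        if t % snapshot_every == 0:
            league.append(list(learner))  # freeze a new league member
    return learner, league
```

Running `train()` grows the league to five members (the initial uniform strategy plus four snapshots), and the learner remains a valid probability distribution throughout. The design point this mirrors from the abstract is that opponents in the league do not keep learning: freezing snapshots prevents the learner from "chasing" a moving target and forces it to stay robust against the whole history of strategies, not just the latest one.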