This AI Is Better at StarCraft II Than You'll Ever Be

#artificialintelligence

Research group DeepMind's AlphaStar AI has reached Grandmaster rank in StarCraft II, the highest tier in the game. Only the best 200 StarCraft II players in each server region are awarded the rank, so it's a pretty incredible feat. It's not the first time AlphaStar has hit the news for its StarCraft II prowess, either. Back in January, the AI beat two StarCraft II professionals in a live exhibition match. Rather than gifting the AI with processing speeds light years beyond those of a humble human, as we've seen with other AI gaming experiments, the researchers at DeepMind leveled the playing field.


British company DeepMind's AI beats pro gamers to achieve 'Grandmaster' status in StarCraft II

Daily Mail - Science & tech

An artificial intelligence developed by British firm DeepMind has achieved 'Grandmaster' status in the real-time, sci-fi strategy game 'StarCraft II'. StarCraft II is one of the world's most lucrative and popular esports, in which players control different alien races to build up forces and defeat their opponents. With each battle presenting thousands of possible moves at any given moment, the video game poses a challenge that surpasses traditional tests like chess or Go. The AI -- dubbed 'AlphaStar' -- proved its mettle in a series of online battles against human opponents, coming out above 99.8 per cent of players in the rankings. This makes AlphaStar the first ever AI to reach the top tier of human performance in a professionally played esport without needing to simplify the game first.


Grandmaster level in StarCraft II using multi-agent reinforcement learning

#artificialintelligence

Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions [1-3], the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems [4]. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general-purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks [5,6].
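The league idea described above can be sketched in a few lines of Python. This is a toy illustration, not DeepMind's implementation: each player's ability is collapsed into a single "skill" number, the gradient update is replaced by a skill increment, and all names (`Player`, `pick_opponent`, `train`) are hypothetical. The one concrete technique it demonstrates is prioritized matchmaking against a league of past strategies, where the learning agent preferentially faces opponents it currently loses to, so counter-strategies keep getting answered.

```python
import random

random.seed(0)  # reproducible toy run

class Player:
    """A league member. Skill is an abstract scalar standing in for a policy."""
    def __init__(self, name, skill):
        self.name = name
        self.skill = skill
        self.record = {}  # opponent name -> (wins, games)

def play(a, b):
    """Simulate one match; the higher-skill player wins more often."""
    p_a = a.skill / (a.skill + b.skill)
    return a if random.random() < p_a else b

def pick_opponent(agent, league):
    """Prioritized fictitious self-play: weight opponents by how badly
    the agent does against them, so weaknesses get patched first."""
    weights = []
    for p in league:
        w, g = agent.record.get(p.name, (0, 0))
        win_rate = w / g if g else 0.5  # unknown opponents count as even
        weights.append((1.0 - win_rate) ** 2)
    return random.choices(league, weights=weights)[0]

def train(steps=1000):
    # A frozen league of past strategies, plus the learning agent.
    league = [Player(f"past_{i}", skill=1.0 + 0.2 * i) for i in range(5)]
    agent = Player("main", skill=1.0)
    for _ in range(steps):
        opp = pick_opponent(agent, league)
        winner = play(agent, opp)
        w, g = agent.record.get(opp.name, (0, 0))
        agent.record[opp.name] = (w + (winner is agent), g + 1)
        if winner is agent:
            agent.skill += 0.01  # stand-in for a policy-gradient update
    return agent

agent = train()
```

In the real system, each league member is a deep neural network and "winning" drives reinforcement-learning updates; the sketch keeps only the matchmaking structure that makes the league diverse.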


The Reinforcement-Learning Methods that Allow AlphaStar to Outcompete Almost All Human Players at StarCraft II - KDnuggets

#artificialintelligence

In January, artificial intelligence (AI) powerhouse DeepMind announced it had achieved a major milestone in its journey towards building AI systems that resemble human cognition. AlphaStar was a DeepMind agent designed using reinforcement learning that was able to beat two professional players at a game of StarCraft II, one of the most complex real-time strategy games of all time. During the last few months, DeepMind continued evolving AlphaStar to the point that the AI agent is now able to play a full game of StarCraft II at a Grandmaster level, outranking 99.8% of human players. The results were recently published in Nature, and they show some of the most advanced self-learning techniques used in modern AI systems. DeepMind's milestone is best explained by tracing the trajectory from the first version of AlphaStar to the current one, along with some of the key challenges of StarCraft II.


An A.I. has beaten humans at yet another of our own games

#artificialintelligence

Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged by consensus as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions [1-3], the best results have been made possible by hand-crafting major elements of the system, simplifying important aspects of the game, or using superhuman capabilities [4]. Even with these modifications, no previous system has come close to rivalling the skill of top players in the full game. We chose to address the challenge of StarCraft using general-purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks [5,6]. We evaluated our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players.