A.I. players master 'Quake III Arena,' manage to outperform humans

#artificialintelligence

Those among us who fear that we've already passed the point of no return when it comes to artificial intelligence becoming self-aware and plotting to murder the human race will likely cite A.I. research company DeepMind's latest experiment as further proof. Using Id Software's Quake III Arena, DeepMind has trained artificial players to be even more effective than their human counterparts. The challenge for DeepMind was not to see whether its A.I. agents could defeat human players in battle, but whether they could work together on procedurally generated levels to complete an objective -- in this case, capture the flag. Because the levels' structure changes each time they play, the agents are unable to simply memorize locations in order to reach the flag. This forced them to actually learn the strategies needed to win, much as human players might improve at the game.


AI Dominates Human Professional Players in StarCraft II

#artificialintelligence

An artificial intelligence has defeated two top-ranked human players in the computer game StarCraft II, using some strategies rarely encountered before. On Thursday, gamers were able to watch the AI agent, called AlphaStar, expertly command armies of "Protoss" units against the professional players. The result: the AI beat the humans in 10 of the 11 matches. "I was surprised by how strong the agent was," said Dario "TLO" Wünsch, one of the human players. "AlphaStar takes well-known strategies and turns them on their head."


StarCraft Pros Are Ready to Battle AI

MIT Technology Review

Message from the world's best StarCraft players to the world's most advanced AI: bring it on. The space-war computer game is widely regarded as the ultimate challenge for AI programs due to its complexity and rapid pace. Expectations for a match-up between a professional StarCraft player and sophisticated AI ratcheted up last year after an AI program beat a highly ranked human player at Go, one of the world's most difficult board games. At the time, a number of AI experts pointed to StarCraft as the next target for an AI-versus-man showdown. Among them: Demis Hassabis, the founder and CEO of DeepMind, the AI-focused division of Alphabet that created the triumphant Go-playing AI program, AlphaGo.


Computers Beat Humans at Poker. Next Up: Everything Else? - Facts So Romantic

Nautilus

Over the span of 20 days early this year, artificial intelligence encountered a major test of how well it can tackle problems in the real world. A program called Libratus took on four of the best poker players in the country, at a tournament at the Rivers Casino in Pittsburgh, Pennsylvania. They were playing a form of poker called heads-up no-limit Texas hold'em, where two players face off, often online, in a long series of hands, testing each other's strategies, refining their own, and bluffing like mad. After 120,000 hands, Libratus emerged with an overwhelming victory over all four opponents, winning $1,776,250 of simulated money and, more importantly, bragging rights as arguably the best poker player on the planet. Just halfway through the competition, Dong Kim, the human player who fared best against the machine, all but admitted defeat.


DeepMind's Agent57 AI agent can best human players across a suite of 57 Atari games – TechCrunch

#artificialintelligence

Development of artificial intelligence agents is frequently measured by their performance in games, and for good reason: games tend to offer a wide proficiency curve, being relatively simple to grasp at a basic level but difficult to master, and they almost always have a built-in scoring system to evaluate performance. DeepMind's agents have tackled the board game Go, as well as the real-time strategy video game StarCraft. But the Alphabet company's most recent feat is Agent57, a learning agent that can beat the average human on each of 57 Atari games spanning a wide range of difficulty, characteristics and gameplay styles. Being better than humans at 57 Atari games may seem like an odd benchmark against which to measure the performance of a deep learning agent, but it's actually a standard that goes all the way back to 2012, with a selection of Atari classics including Pitfall, Solaris, Montezuma's Revenge and many others. Taken together, these games represent a broad range of difficulty levels and require a range of different strategies to achieve success.