To drive AI forward, teach computers to play old-school text adventure games

#artificialintelligence

Games have long been used as test beds and benchmarks for artificial intelligence, and there has been no shortage of achievements in recent months. Google DeepMind's AlphaGo and poker bot Libratus from Carnegie Mellon University have both beaten human experts at games that have traditionally been hard for AI – some 20 years after IBM's Deep Blue achieved the same feat in chess. Games like these have the attraction of clearly defined rules; they are relatively simple and cheap for AI researchers to work with, and they provide a variety of cognitive challenges at any desired level of difficulty. By inventing algorithms that play them well, researchers hope to gain insights into the mechanisms needed to function autonomously. With the arrival of the latest techniques in AI and machine learning, attention is now shifting to visually detailed computer games – including the 3D shooter Doom, various 2D Atari games such as Pong and Space Invaders, and the real-time strategy game StarCraft.


AlphaGo Wins Final Game In Match Against Champion Go Player

IEEE Spectrum Robotics

AlphaGo, a largely self-taught Go-playing AI, last night won the fifth and final game in a match held in Seoul, South Korea, against Lee Sedol, one of the greatest modern players of the ancient Chinese game. The final score was 4 games to 1. Thus falls the last and computationally hardest game that programmers have taken as a test of machine intelligence. Chess, AI's original touchstone, fell to the machines 19 years ago, but Go had been expected to last for many years to come. The sweeping victory means far more than the US $1 million prize, which Google's London-based acquisition, DeepMind, says it will give to charity.


Could AlphaGo Bluff Its Way through Poker?

#artificialintelligence

One of the scientists responsible for AlphaGo, the Google DeepMind software that trounced one of the world's best Go players recently, says the same approach can produce a surprisingly competent poker bot. Unlike board games such as Go or chess, poker is a game of "imperfect information," and for this reason it has proved even more resistant to computerization than Go. Gameplay in poker involves devising a strategy based on the cards you have in your hand and a guess as to what's in your opponents' hands. Poker players try to read the behavior of others at the table using a combination of statistics and more subtle behavioral cues. Because of this, building an effective poker bot using machine learning may be significant for real-world applications of AI.
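The paragraph above turns on the idea of "imperfect information": because an opponent's cards are hidden, every decision must be evaluated against a guessed distribution over their possible hands rather than a known game state. A minimal sketch of that idea (purely illustrative, not DeepMind's or Libratus's method; the pot sizes and opponent beliefs below are made-up numbers):

```python
def call_ev(pot, bet, win_prob):
    """Expected chips gained by calling a bet of `bet` into a pot of
    `pot`, given our estimated probability of winning the showdown."""
    return win_prob * (pot + bet) - (1 - win_prob) * bet

def best_action(pot, bet, opponent_range):
    """Pick call vs. fold against a belief about the hidden hand.

    opponent_range: list of (probability, we_win) pairs encoding our
    guess about what the opponent holds. Folding always has EV 0.
    """
    win_prob = sum(p for p, we_win in opponent_range if we_win)
    ev = call_ev(pot, bet, win_prob)
    return ("call", ev) if ev > 0 else ("fold", 0.0)

# Suppose we believe the opponent is bluffing (holding hands we beat)
# 40% of the time when facing a 50-chip bet into a 100-chip pot:
action, ev = best_action(pot=100, bet=50,
                         opponent_range=[(0.4, True), (0.6, False)])
# With these numbers the call is profitable, so the sketch calls.
```

Real poker bots go much further, reasoning about how their own actions leak information and balancing bluffs against value bets, but the core difficulty is the same: the quality of a decision depends on a probability distribution the player can never observe directly.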


One of the world's most popular computer games will soon be open to many sophisticated AI players

#artificialintelligence

Teaching computers to play the board game Go is impressive, but if we really want to push the limits of machine intelligence, perhaps they'll need to learn to rush a Zerg or set a trap for a horde of invading Protoss ships. StarCraft, a hugely popular space-fiction-themed strategy computer game, will soon be accessible to advanced AI players. Blizzard Entertainment, the company behind the game, and Google DeepMind, a subsidiary of Alphabet focused on developing general-purpose artificial intelligence, announced the move at a games conference today. Teaching computers to play StarCraft II expertly would be a significant milestone in artificial-intelligence research. Within the game, players must build bases, mine resources, and attack their opponents' outposts.

