Online chess game lets you see what the computer is thinking

#artificialintelligence

Artificial intelligence has shown what it can do when facing off against humans in ancient board games, with Deep Blue and AlphaGo having already proved their worth on the world stage. While computers playing chess is nothing new, an online version of the ancient game lifts the veil on the AI to let players see what it is thinking. You make your move and then watch the computer come to life, calculating thousands of possible counter-moves. Thinking Machine 6 is an AI-based concept art piece created by Martin Wattenberg. Rather than trying to turn players into chess champions, it reveals the AI's thinking process.


One of the world's most popular computer games will soon be open to many sophisticated AI players

#artificialintelligence

Teaching computers to play the board game Go is impressive, but if we really want to push the limits of machine intelligence, perhaps they'll need to learn to rush a Zerg enemy or set a trap for a horde of invading Protoss ships. StarCraft II, a hugely popular science-fiction strategy computer game, will soon be accessible to advanced AI players. Blizzard Entertainment, the company behind the game, and Google DeepMind, the Alphabet subsidiary focused on developing general-purpose artificial intelligence, announced the move at a games conference today. Teaching computers to play StarCraft II expertly would be a significant milestone in artificial-intelligence research. Within the game, players must build bases, mine resources, and attack their opponents' outposts.


Why football, not chess, is the true final frontier for robotic artificial intelligence

#artificialintelligence

The first was Monte Carlo tree search, an algorithm that, rather than attempting to examine all possible future moves, tests a sparse selection of them and combines their values in a sophisticated way to get a better estimate of a move's quality. The second was the (re)discovery of deep networks, a contemporary incarnation of the neural networks that had been experimented with since the 1960s, but which were now cheaper, more powerful, and equipped with huge amounts of data with which to train them. The combination of these techniques brought a drastic improvement in Go-playing programs, and ultimately Google DeepMind's AlphaGo program beat Go world champion Lee Sedol in March 2016. Now that Go has fallen, where do we go from here? Following Kasparov's defeat in 1997, scientists began to argue that the real challenge for AI was not to conquer yet another cerebral board game.
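
The excerpt only sketches how Monte Carlo tree search works, so here is a minimal, hedged illustration of the idea: selection with the standard UCT formula, expansion of one new node, a random playout, and backpropagation of the result. It runs on a toy single-pile Nim game rather than Go or chess, and every name in it (Node, legal_moves, the exploration constant 1.4, the iteration count) is an assumption made for this sketch, not anything taken from AlphaGo or the article.

```python
import math
import random

# Toy domain: single-pile Nim. Players alternately remove 1-3 stones;
# whoever takes the last stone wins. The game, class, and constants here
# are illustrative choices only.

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

class Node:
    def __init__(self, stones, parent=None, move=None):
        self.stones = stones          # stones remaining after 'move' was played
        self.parent = parent
        self.move = move              # the move that led to this node
        self.children = []
        self.untried = legal_moves(stones)
        self.visits = 0
        self.wins = 0.0               # wins for the player who just moved into this node

    def uct_child(self, c=1.4):
        # UCT rule: exploit high win rates, plus a bonus for rarely visited children.
        return max(self.children,
                   key=lambda n: n.wins / n.visits
                                 + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts(root_stones, iterations=2000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root

        # 1. Selection: walk down fully expanded nodes using UCT.
        while not node.untried and node.children:
            node = node.uct_child()

        # 2. Expansion: add one untried child, if any remain.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            child = Node(node.stones - move, parent=node, move=move)
            node.children.append(child)
            node = child

        # 3. Simulation: random playout; track whether the player who just
        #    moved into 'node' ends up taking the last stone.
        stones = node.stones
        just_moved_wins = True        # if stones == 0 already, they took the last stone
        while stones > 0:
            stones -= random.choice(legal_moves(stones))
            just_moved_wins = not just_moved_wins

        # 4. Backpropagation: the winning perspective flips at each level.
        result = 1.0 if just_moved_wins else 0.0
        while node is not None:
            node.visits += 1
            node.wins += result
            result = 1.0 - result
            node = node.parent

    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda n: n.visits).move

if __name__ == "__main__":
    print(mcts(5))  # should recommend taking 1, leaving a multiple of 4 for the opponent
```

Recommending the most-visited root move at the end mirrors common MCTS practice: the move the search spent the most simulations on is the one whose value estimate is most reliable.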

