AlphaGo, a largely self-taught Go-playing AI, last night won the fifth and final game of a match held in Seoul, South Korea, against Lee Sedol, one of the greatest modern players of the ancient Chinese game. The final score was 4 games to 1. Thus falls the last and computationally hardest game that programmers have taken as a test of machine intelligence. Chess, AI's original touchstone, fell to the machines 19 years ago, but Go had been expected to hold out for many years to come. The sweeping victory means far more than the US$1 million prize, which Google's London-based acquisition, DeepMind, says it will give to charity.
People have always had a fascination with robots – from Da Vinci's automatons in the 15th century right the way through to Channel 4's biggest drama hit in 20 years, Humans, which is back for a second season. The power and capability of robotics and computing is increasing at pace – Google DeepMind's AlphaGo programme finally beat Lee Sedol at the game of Go earlier this year, something programmers have aimed for since IBM's Deep Blue beat chess champion Garry Kasparov in 1997. Our fascination with this progress is certainly tinged with fear. While it may make for excellent TV, we are not comfortable with the idea of machines coming for our jobs, our partners and world domination. The robot apocalypse is still (hopefully) a while off, so rather than hiding in a bunker and waiting for the end, we should instead look to the world of chess as an example of how to make the most of technology.
In March of last year, Google's (Mountain View, California) artificial intelligence (AI) computer program AlphaGo beat the best Go player in the world, 18-time champion Lee Se-dol, in a tournament, winning 4 of 5 games.1 At first glance this news would seem of little interest to a pathologist, or to anyone else for that matter. After all, many will remember that IBM's (Armonk, New York) computer program Deep Blue beat Garry Kasparov, at the time the greatest chess player in the world, and that was 19 years ago. The rules of the several-thousand-year-old game of Go are extremely simple. The board consists of 19 horizontal and 19 vertical black lines.
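The simplicity of those rules belies the size of the game. A quick back-of-the-envelope sketch (not from the article; the figures are a naive upper bound, since many configurations are not legal positions) shows why Go resisted computers so much longer than chess:

```python
# A 19x19 grid of lines meets at 361 intersections.
intersections = 19 * 19
print(intersections)  # 361

# Each intersection is empty, black, or white, giving at most
# 3^361 board configurations -- a rough upper bound on Go's
# state space (legal positions are fewer, but still astronomical).
upper_bound = 3 ** intersections
print(len(str(upper_bound)))  # a 173-digit number, roughly 10^172
```

Even this crude count dwarfs the roughly 10^47 positions usually quoted for chess, which is why brute-force search of the kind Deep Blue used was never a realistic option for Go.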
Games have long been used as test beds and benchmarks for artificial intelligence, and there has been no shortage of achievements in recent months. Google DeepMind's AlphaGo and poker bot Libratus from Carnegie Mellon University have both beaten human experts at games that have traditionally been hard for AI – some 20 years after IBM's Deep Blue achieved the same feat in chess. Games like these have the attraction of clearly defined rules; they are relatively simple and cheap for AI researchers to work with, and they provide a variety of cognitive challenges at any desired level of difficulty. By inventing algorithms that play them well, researchers hope to gain insights into the mechanisms an intelligent agent needs in order to function autonomously. With the arrival of the latest techniques in AI and machine learning, attention is now shifting to visually detailed computer games – including the 3D shooter Doom, various 2D Atari games such as Pong and Space Invaders, and the real-time strategy game StarCraft.
One of the scientists responsible for AlphaGo, the Google DeepMind software that trounced one of the world's best Go players recently, says the same approach can produce a surprisingly competent poker bot. Unlike board games such as Go or chess, poker is a game of "imperfect information," and for this reason it has proved even more resistant to computerization than Go. Gameplay in poker involves devising a strategy based on the cards you have in your hand and a guess as to what's in your opponents' hands. Poker players try to read the behavior of others at the table using a combination of statistics and more subtle behavioral cues. Because of this, building an effective poker bot using machine learning may be significant for real-world applications of AI.