Given the challenges that ordinary human beings encounter when mastering such games, a natural focus in Artificial Intelligence (AI) research is to build systems that can achieve the same level of game-playing performance as a Grand Master.
'Mental' games, such as Chess, Checkers and Go, have been cultural staples throughout human history, from ancient Egypt to China. Mastery of such games requires formidable strategic skill, relying on a combination of intelligence, practice, intuition, and decision-making under uncertainty. Often, decisions ('moves' in game terminology) must also be made under constraints of time.
Building programs that can play complex games has a long history in AI research. Early, extremely influential examples may be found in the work of such giants as Newell, Shaw and Simon, who first identified mastery of chess as an important indication of progress in building intelligent systems. Another game that witnessed breakthrough AI research, especially in the 1980s and 1990s, was Backgammon.
Fast-forwarding to 1997, IBM's Deep Blue went down in history as the computer that narrowly beat then-reigning World Champion Garry Kasparov at Chess. In our own time, Google's AlphaGo made headlines for beating Lee Sedol, one of the world's top Go professionals, in the ancient game of Go in a five-game series of publicly broadcast matches. Even more recently, an AI called Libratus out-bluffed masterful human players at Poker. Going beyond games of skill, a few years ago IBM's Watson made the news for beating human champions at the trivia game Jeopardy!, demonstrating that AI programs are becoming more proficient at understanding natural languages like English. In the years since, AI-based conversational systems like Siri, Alexa and Cortana have become staples on phones and computers. Some form of AI is even integrated into Barbie dolls and many cars currently on the street. The day may not be far off when driverless cars are the norm.
Given the brief unfolding history of AI and games above, it is not unreasonable to say (albeit at the risk of some simplification) that many milestones in AI research are marked by the achievement of super-human performance in a particular game, such as Chess, that has withstood the test of time.
Importantly, the same techniques used to build game-playing AIs are also being used to revolutionize entire fields, such as space exploration and medical research, traditionally considered separate from core Computer Science. Wouldn't it be cool to build an AI system that can beat a Grand Master in your favorite game and that helps humankind find a cure for cancer (and explore Saturn) at the same time?
AIs are now better than humans at Backgammon, Checkers, Chess, Othello, and Go. See Andrey Kurenkov's A 'Brief' History of Game AI Up to AlphaGo for a more in-depth timeline. In 2017, Michael Tucker, Nikhil Prabala, and I set out to create PAI, the world's first AI for Pathwayz. The AIs for Othello and Backgammon were especially relevant to our development of PAI. Othello, like Pathwayz, is a relatively young game -- at least compared to the ancient Backgammon, Checkers, Chess, and Go.
In 2016, Lee Sedol, one of the world's best players of Go, lost a match in Seoul to a computer program called AlphaGo by four games to one. It was a big event, both in the history of Go and in the history of artificial intelligence (AI). Go occupies roughly the same place in the culture of China, Korea and Japan as chess does in the West. After its victory over Mr Lee, AlphaGo beat dozens of renowned human players in a series of anonymous games played online, before re-emerging in May to face Ke Jie, the game's best player, in Wuzhen, China. Mr Ke fared no better than Mr Lee, losing to the computer 3-0.
It's been a little more than 20 years since IBM's Deep Blue computer beat chess champion Garry Kasparov in a six-game match. Since that time, artificial intelligence -- also known as machine intelligence -- has achieved a previously unimaginable level of breadth, depth and speed. To say it is revolutionizing our daily lives is an understatement. Alexa, Siri, AlphaGo; Amazon's book recommendations driven by algorithms trained on previous reading preferences; the detection of malware; machines that signal to a central data bank when material fatigue puts an engine at risk -- these are just some of the breakthroughs in artificial intelligence. While AI is getting smarter at increasingly complex and cognitively demanding tasks, there are still areas where humans excel, including creative tasks and those that require physical dexterity; but these areas are rapidly shrinking, given the comparative advantages of AI over the human mind.
Recently, Google DeepMind's program AlphaGo Zero achieved superhuman level without any human guidance, entirely by self-play! The technical details are explained in the Nature paper "Mastering the Game of Go without Human Knowledge". One of the main reasons for its success was a novel form of reinforcement learning in which AlphaGo Zero learned by playing against itself. The system starts with a neural network that knows nothing about Go. It plays millions of games against itself, tuning the neural network to predict the next move and the eventual winner of each game. The updated neural network is then combined with the Monte Carlo Tree Search algorithm to create a new, stronger version of AlphaGo Zero, and the process repeats.
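To make the self-play loop concrete, here is a minimal, hypothetical sketch in Python. It replaces AlphaGo Zero's neural network with random rollouts (i.e., plain UCT Monte Carlo Tree Search) and uses tic-tac-toe in place of Go; the function names (`uct_search`, `self_play_game`) are my own illustration, not DeepMind's code.

```python
import math
import random

# Tic-tac-toe board: tuple of 9 cells, 1 = X, -1 = O, 0 = empty.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]
    return 0

def legal_moves(board):
    return [i for i, v in enumerate(board) if v == 0]

def apply_move(board, move, player):
    b = list(board)
    b[move] = player
    return tuple(b)

def rollout(board, player):
    # Random playout; in AlphaGo Zero this estimate comes from the
    # value head of the neural network instead.
    while True:
        w = winner(board)
        if w or not legal_moves(board):
            return w
        board = apply_move(board, random.choice(legal_moves(board)), player)
        player = -player

class Node:
    def __init__(self, board, player, parent=None):
        self.board, self.player = board, player   # player = side to move
        self.parent, self.children = parent, {}   # children: move -> Node
        self.visits, self.wins = 0, 0.0

def uct_search(board, player, iters=1000):
    root = Node(board, player)
    for _ in range(iters):
        node = root
        # Selection: descend by the UCB1 rule; expand one new child.
        while True:
            if winner(node.board) or not legal_moves(node.board):
                break
            untried = [m for m in legal_moves(node.board)
                       if m not in node.children]
            if untried:
                m = random.choice(untried)
                node.children[m] = Node(
                    apply_move(node.board, m, node.player),
                    -node.player, node)
                node = node.children[m]
                break
            node = max(node.children.values(),
                       key=lambda c: c.wins / c.visits
                       + math.sqrt(2 * math.log(node.visits) / c.visits))
        result = winner(node.board) or rollout(node.board, node.player)
        # Backpropagation: score from the perspective of the player
        # who made the move leading into each node.
        while node:
            node.visits += 1
            if result == -node.player:
                node.wins += 1
            elif result == 0:
                node.wins += 0.5
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

def self_play_game(iters=200):
    # One game of the program playing both sides against itself.
    board, player = (0,) * 9, 1
    while not winner(board) and legal_moves(board):
        board = apply_move(board, uct_search(board, player, iters), player)
        player = -player
    return winner(board)  # 1 = X wins, -1 = O wins, 0 = draw
```

In the real system, `rollout` is replaced by the network's value prediction, the network's move priors bias selection, and the (position, chosen move, final winner) records produced by games like `self_play_game` become the training data for the next, stronger network.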
AI has a long history of defeating human players at games. IBM's "Deep Blue", built by a team whose work began at Carnegie Mellon University, beat chess world champion Garry Kasparov in their 1997 rematch. Google's AlphaGo won at Go by defeating leading player Lee Sedol. IBM's supercomputer Watson beat two "Jeopardy!" champions at their own game in 2011. But did you know that AI recently conquered the very human game of Poker?
Rendering animation on your phone on the fly, or getting Alexa to play a musical compilation -- those are just two examples of how we use artificial intelligence (AI) in our day-to-day lives, whether we know it or not. But AI is much more than that. AI is all about training computers by example, rather than by explicit programming. UK Prime Minister Theresa May, speaking at Davos last month, said the UK could be a world leader in AI.
News that a specialized computer program has beaten human champions at games like chess and Go no longer surprises people the way it once did, as when Deep Blue beat world chess champion Garry Kasparov back in 1997, or more recently when Google DeepMind's AlphaGo defeated Lee Sedol in a stunning upset in 2016.