'Mental' games, such as Chess, Checkers, and Go, are staples of every known culture in human history, from the ancient Egyptians to the Chinese. Mastery of such games requires formidable strategic skill, relying on a combination of intelligence, practice, intuition, and decision-making under uncertainty. Often, decisions ('moves', in game terminology) must be made under constraints of time.
Given the challenges that ordinary human beings face when mastering such games, a natural focus of Artificial Intelligence (AI) research is to build systems that can achieve the same level of game-playing performance as a Grand Master.
Building programs that can play complex games has a long history in AI research. Early, extremely influential examples may be found in the work of such giants as Newell, Shaw, and Simon, who first identified mastery of chess as an important indicator of progress in building intelligent systems. Another game that witnessed breakthrough AI research, especially in the 1980s and 1990s, was Backgammon.
Fast-forwarding to 1997, IBM's Deep Blue went down in history as the computer that narrowly beat the then-reigning World Champion, Garry Kasparov, at Chess. In our own time, Google's AlphaGo made headlines for beating the reigning (human) World Champion, Lee Sedol, at the ancient game of Go in a best-of-five series of publicly broadcast matches. Even more recently, an AI called Libratus out-bluffed masterful human players at Poker. Going beyond games of skill, a few years ago IBM's Watson made the news for beating human players at the trivia game Jeopardy!, demonstrating that AI programs are becoming more proficient at understanding natural languages like English. In the years since, AI-based conversational systems like Siri, Alexa, and Cortana have become staples in phones and computers. Some form of AI is even integrated into Barbie dolls and many cars currently on the street. The day may not be far off when driverless cars are the norm.
Given the brief history of AI and games unfolded above, it is not unreasonable to say (albeit at the risk of some simplification) that many milestones in AI research are marked by the achievement of super-human performance in a particular game, such as Chess, that has withstood the twin tests of time and space.
Importantly, the same techniques used to build game-playing AIs are also being used to revolutionize entire fields, such as space exploration and medical research, traditionally considered separate from core Computer Science. Wouldn't it be cool to build an AI system that can beat a Grand Master at your favorite game and that helps humankind find a cure for cancer (and explore Saturn) at the same time?
Games have long been used as test beds and benchmarks for artificial intelligence, and there has been no shortage of achievements in recent months. Google DeepMind's AlphaGo and poker bot Libratus from Carnegie Mellon University have both beaten human experts at games that have traditionally been hard for AI – some 20 years after IBM's Deep Blue achieved the same feat in chess. Games like these have the attraction of clearly defined rules; they are relatively simple and cheap for AI researchers to work with, and they provide a variety of cognitive challenges at any desired level of difficulty. By inventing algorithms that play them well, researchers hope to gain insights into the mechanisms needed to function autonomously. With the arrival of the latest techniques in AI and machine learning, attention is now shifting to visually detailed computer games – including the 3D shooter Doom, various 2D Atari games such as Pong and Space Invaders, and the real-time strategy game StarCraft.
[Photo caption: A computer screen photographed on February 16, 1996 at IBM's headquarters in Armonk, New York, during IBM supercomputer Deep Blue's matches against world chess champion Garry Kasparov.]
"Bot" is the name given to an artificial intelligence (AI) that takes the place of a player character in online multiplayer video games. Some of the earliest examples include Perfect Dark on the Nintendo 64, which included the feature as a means of bypassing player limitations on such pre-Internet consoles. AI has been a feature of video games since their inception way back in 1947, and one of the most significant examples of game-playing AI more broadly is Deep Blue, the chess computer created by IBM that is notable for its ability to best the greatest minds in chess, including Garry Kasparov. However, AI is a concept that has been somewhat flipped on its head in recent years, as ground-breaking innovations such as machine learning and the Internet of Things (IoT) have become vital instruments in the corporate world.
Despite losing at chess to the IBM Deep Blue computer more than 20 years ago, Garry Kasparov is a big believer in artificial intelligence. The former world chess champion is now an author and speaker who is trying to counter some of the more alarmist beliefs about the rise of AI technologies, typically exemplified in Hollywood movies in which robots rise against their human creators. Speaking at the Train AI conference on Thursday in San Francisco, Kasparov explained how humanity has long treated a person's performance at chess as a metric of intelligence. "People looked at it as an opportunity to go deep in the human mind," he said of chess. That is why, when Kasparov lost to Deep Blue in 1997 in a rematch of a prior match he had won in 1996 (which, he likes to note, "nobody remembers"), people considered it a "watershed moment" for computer science.
Garry Kasparov, a former Soviet world chess champion and one of the greatest players of all time, has changed his tune about AI since he was beaten by IBM's Deep Blue. During a talk at the Train AI conference in San Francisco on Thursday, Kasparov traced the steps that convinced him that humans and machines might one day work together to create an "augmented intelligence". He's had a lot of time to contemplate the rise of the machines. Over 20 years ago, at the height of his career as the world chess champion, he entered a competition to play chess against a supercomputer. "The day machines would beat the strongest human player had to be the dawn of AI."
We're not being replaced by AI. My chess loss in 1997 to the IBM supercomputer Deep Blue was a victory for its human creators and for mankind, not a triumph of machine over man. In the same way, machine-generated insight adds to ours, extending our intelligence the way a telescope extends our vision. We aren't close to creating machines that think for themselves, with the awareness and self-determination that implies. Our machines are still entirely dependent on us to define every aspect of their capabilities and purpose, even as they master increasingly sophisticated tasks.
A couple of months ago, Google's Artificial Intelligence (AI) group, DeepMind, unveiled the latest incarnation of its Go-playing program, AlphaGo Zero: an AI so powerful that in just three days it crammed in thousands of years of human knowledge of the game before inventing better moves of its own. It has been hailed as a major breakthrough in AI learning because, unlike previous versions of AlphaGo, which went on to beat the world Go champion as well as take the online Go player community to the cleaners, AlphaGo Zero mastered the ancient Chinese board game from nothing more than a clean slate, with no more help from humans than being told the rules of the game. As if that were not already impressive enough, it also took its predecessor AlphaGo, the AI that famously beat the South Korean grandmaster Lee Sedol, to the cleaners, hammering it 100 games to nil. AlphaGo Zero's ability to learn for itself, without human input, is a milestone on the road to one day realising Artificial General Intelligence (AGI), something the same company, DeepMind, published an architecture for last year, and it will undoubtedly help us create the next generation of more "general" AIs that can do a lot more than just thrash humans at board games. AlphaGo Zero amassed its impressive skills using a technique called Reinforcement Learning; at the heart of the program is a group of software "neurons" connected together to form a digital neural network.
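The self-play idea can be sketched in miniature. The following toy example (my own illustration, not DeepMind's actual architecture, which uses a deep neural network and Monte Carlo tree search) shows a tabular reinforcement-learning agent teaching itself the game of Nim, starting from nothing but the rules, the same "clean slate" principle described above:

```python
import random

# Toy sketch of self-play reinforcement learning: an agent learns Nim
# (players alternately take 1-3 stones from a pile; whoever takes the
# last stone wins) purely by playing against itself.

def legal_moves(pile):
    """Moves available to the player facing `pile` stones."""
    return [m for m in (1, 2, 3) if m <= pile]

def self_play_train(pile_size=10, episodes=20000, alpha=0.3, epsilon=0.2, seed=0):
    """Learn move values from self-play games with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {}  # (pile, move) -> estimated return for the player making the move
    for _ in range(episodes):
        pile, history = pile_size, []
        while pile > 0:
            moves = legal_moves(pile)
            if rng.random() < epsilon:          # explore a random move
                move = rng.choice(moves)
            else:                               # exploit current knowledge
                move = max(moves, key=lambda m: q.get((pile, m), 0.0))
            history.append((pile, move))
            pile -= move
        # The player who took the last stone won: credit +1 to their moves
        # and -1 to the opponent's, walking backwards through the game.
        ret = 1.0
        for pile, move in reversed(history):
            old = q.get((pile, move), 0.0)
            q[(pile, move)] = old + alpha * (ret - old)
            ret = -ret
    return q

def best_move(q, pile):
    """Greedy move under the learned values."""
    return max(legal_moves(pile), key=lambda m: q.get((pile, m), 0.0))
```

In optimal Nim play the winner always leaves the opponent a multiple of four stones; given enough self-play episodes, the agent rediscovers this strategy on its own, with no human guidance beyond the rules.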
Scrolling through Twitter on his phone before going to sleep on 22 May 2017, Dan Hett saw a few vague mentions of an accident of some sort in Manchester: "no details, no actual news, just busybodies speculating." He rubbed his eyes, removed his glasses and lay down without thinking about it any further. It wasn't until he picked up his phone the following morning and saw hundreds of notifications that he realised something real had happened, that there had been an explosion, and that his brother Martyn was missing. "The messages, the ones you read … they were right, and you went to sleep," said a voice in his head. "You went to fucking sleep."
When Michael Bowling was growing up in Ohio, his parents were avid card players, dealing out hands of everything from euchre to gin rummy. Meanwhile, he and his friends would tear up the board games lying around the family home and combine the pieces to make their own games, with new challenges and new markers for victory. Bowling has come far from his days of playing with colourful cards and plastic dice. He has three degrees in computing science and is now a professor at the University of Alberta. But, in his heart, Bowling still loves playing games.
The area of computation called artificial intelligence (AI) is falsified here by revisiting a previous 1972 falsification of AI by the British applied mathematician James Lighthill. It is explained how Lighthill's arguments continue to apply to current AI. It is argued that AI should use the Popperian scientific method, in which it is the duty of every scientist to attempt to falsify theories and, if theories are falsified, to replace or modify them. The paper describes the Popperian method in detail and discusses Paul Nurse's application of the method to cell biology, which also involves questions of mechanism and behavior. Arguments used by Lighthill in his original 1972 report that falsified AI are discussed, and the Lighthill arguments are then shown to apply to current AI. The argument uses recent scholarship to explain Lighthill's assumptions and to show how arguments based on those assumptions continue to falsify modern AI. An important focus of the argument is Hilbert's philosophical programme, which defined knowledge and truth as provable formal sentences. Current AI takes the Hilbert programme as dogma beyond criticism, whereas Lighthill, as a mid-20th-century applied mathematician, had abandoned it. The paper uses recent scholarship to explain John von Neumann's criticism of AI, which I claim was assumed by Lighthill. The paper discusses computer chess programs to show that Lighthill's combinatorial explosion still applies to AI but not to humans. An argument is given that Turing Machines (TMs) are not the correct description of computation. The paper concludes by advocating studying computation as Peter Naur's datalogy.
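The combinatorial-explosion point can be made concrete with a back-of-the-envelope calculation. The figures below (roughly 35 legal moves per chess position and games of about 80 plies) are commonly cited averages, assumed here purely for illustration:

```python
import math

# Rough illustration of the combinatorial explosion in chess search:
# with ~35 legal moves per position and games of ~80 half-moves (both
# assumed averages), the full game tree is astronomically large.
BRANCHING_FACTOR = 35   # assumed average number of legal moves
GAME_LENGTH = 80        # assumed typical game length in half-moves

leaf_positions = BRANCHING_FACTOR ** GAME_LENGTH
print(f"~10^{math.floor(math.log10(leaf_positions))} leaf positions")
```

Even at billions of positions per second, exhaustively enumerating a tree of this size would take incomparably longer than the age of the universe, which is why brute-force search alone cannot be the whole story of game-playing intelligence.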