Games


To drive AI forward, teach computers to play old-school text adventure games

#artificialintelligence

Games have long been used as test beds and benchmarks for artificial intelligence, and there has been no shortage of achievements in recent months. Google DeepMind's AlphaGo and poker bot Libratus from Carnegie Mellon University have both beaten human experts at games that have traditionally been hard for AI – some 20 years after IBM's Deep Blue achieved the same feat in chess. Games like these have the attraction of clearly defined rules; they are relatively simple and cheap for AI researchers to work with, and they provide a variety of cognitive challenges at any desired level of difficulty. By inventing algorithms that play them well, researchers hope to gain insights into the mechanisms needed to function autonomously. With the arrival of the latest techniques in AI and machine learning, attention is now shifting to visually detailed computer games – including the 3D shooter Doom, various 2D Atari games such as Pong and Space Invaders, and the real-time strategy game StarCraft.


How Artificial Intelligence May Further Develop The Fast Growing Esports Industry

#artificialintelligence

The word "bot" is the name given to artificial intelligence (AI) that takes the place of player characters in online multiplayer video games. Some of the earliest examples include Perfect Dark on the Nintendo 64, which included the feature as a way of working around player limits on consoles that predated online play. AI has been a feature of video games since their inception back in 1947, with one of the most significant examples being Deep Blue, the chess computer created by IBM that is notable for besting the greatest minds in chess, including Garry Kasparov. In recent years, however, the concept of AI has somewhat been flipped on its head, as ground-breaking innovations such as machine learning and the Internet of Things (IoT) have become vital tools in the corporate world.


Garry Kasparov Talks Artificial Intelligence, Deep Blue, And AlphaGo Zero

#artificialintelligence

Despite losing at chess to the IBM Deep Blue computer more than 20 years ago, Garry Kasparov is a big believer in artificial intelligence. The former world chess champion is now an author and speaker who is trying to counter some of the more alarmist beliefs about the rise of AI technologies, typically exemplified in Hollywood movies in which robots rise against their human creators. Speaking at the Train AI conference on Thursday in San Francisco, Kasparov explained how humanity has long treated performance at chess as a metric of intelligence. "People looked at it as an opportunity to go deep in the human mind," he said of chess. That's why when Kasparov lost to Deep Blue in 1997, in a rematch of a prior match he won in 1996 – which, he likes to note, "nobody remembers" – people considered it a "watershed moment" for computer science.


Don't try and beat AI, merge with it, says chess champ Garry Kasparov

#artificialintelligence

Garry Kasparov, a former Soviet world chess champion and one of the greatest players of all time, has changed his tune about AI since he was beaten by IBM's Deep Blue. During a talk at the Train AI conference in San Francisco on Thursday, Kasparov traced the steps that convinced him that humans and machines might one day work together to create an "augmented intelligence". He's had a lot of time to contemplate the rise of machines. Over 20 years ago, at the height of his career as the world chess champion, he entered a competition to play chess against a supercomputer. "The day machines would beat the strongest human player had to be the dawn of AI."


Intelligent Machines Will Teach Us – Not Replace Us

WSJ.com: WSJD - Technology

We're not being replaced by AI. My chess loss in 1997 to IBM supercomputer Deep Blue was a victory for its human creators and mankind, not a triumph of machine over man. In the same way, machine-generated insight adds to ours, extending our intelligence the way a telescope extends our vision. We aren't close to creating machines that think for themselves, with the awareness and self-determination that implies. Our machines are still entirely dependent on us to define every aspect of their capabilities and purpose, even as they master increasingly sophisticated tasks.


DeepMind's newest AI learns by itself and creates its own knowledge

#artificialintelligence

A couple of months ago Google's Artificial Intelligence (AI) group, DeepMind, unveiled the latest incarnation of its Go-playing program, AlphaGo Zero, an AI so powerful that it packed thousands of years of human Go knowledge into just three days of play before inventing better moves of its own. It was hailed as a major breakthrough in AI learning because, unlike previous versions of AlphaGo – which went on to beat the world Go champion and take the online Go community to the cleaners – AlphaGo Zero mastered the ancient Chinese board game from a clean slate, with no more help from humans than being told the rules of the game. And as if that weren't already impressive enough, it then thrashed its predecessor AlphaGo, the AI that famously beat the South Korean grandmaster Lee Sedol, hammering it 100 games to nil. AlphaGo Zero's ability to learn for itself, without human input, is a milestone on the road to one day realising Artificial General Intelligence (AGI) – something DeepMind published an architecture for last year – and it will undoubtedly help us create the next generation of more "general" AIs that can do far more than just thrash humans at board games. AlphaGo Zero amassed its impressive skills using a technique called reinforcement learning; at the heart of the program is a group of software "neurons" connected together to form a digital neural network.
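To make that last idea concrete, here is a minimal, hypothetical sketch of learning a game from nothing but its rules through self-play. It uses a simple tabular value estimate on the toy game of Nim rather than AlphaGo Zero's actual machinery (a deep neural network guided by Monte Carlo tree search); the game, constants, and function names are illustrative assumptions, not anything published by DeepMind.

```python
# A minimal, illustrative self-play learner for the toy game of Nim:
# players alternately take 1-3 stones, and whoever takes the last stone wins.
# This sketches the *idea* of learning from the rules alone via self-play;
# AlphaGo Zero itself uses a deep neural network plus Monte Carlo tree search.
import random
from collections import defaultdict

q = defaultdict(float)      # q[(stones_left, move)] -> estimated value for the mover
EPSILON, ALPHA = 0.1, 0.5   # exploration rate, learning rate

def legal_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones):
    """Epsilon-greedy: usually pick the best-known move, sometimes explore."""
    moves = legal_moves(stones)
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: q[(stones, m)])

for _ in range(50_000):                     # self-play episodes
    stones, history = random.randint(1, 15), []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    reward = 1.0                            # the player who took the last stone won
    for state, move in reversed(history):   # walk back, alternating winner/loser
        q[(state, move)] += ALPHA * (reward - q[(state, move)])
        reward = -reward

# The learner rediscovers Nim's known strategy: leave a multiple of 4 stones.
for stones in range(1, 8):
    best = max(legal_moves(stones), key=lambda m: q[(stones, m)])
    print(f"{stones} stones left -> take {best}")
```

The ingredients are the same ones AlphaGo Zero scales up with deep networks: self-play, a learned value estimate, and a reward that arrives only when a game ends.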


Games console: the indie designer pouring his grief into interactive art

The Guardian

Scrolling through Twitter on his phone before going to sleep on 22 May 2017, Dan Hett saw a few vague mentions of an accident of some sort in Manchester: "no details, no actual news, just busybodies speculating." He rubbed his eyes, removed his glasses and lay down without thinking about it any further. It wasn't until he picked up his phone the following morning and saw hundreds of notifications that he realised something real had happened, that there had been an explosion, and that his brother Martyn was missing. "The messages, the ones you read … they were right, and you went to sleep," said a voice in his head. "You went to fucking sleep."


How poker and other games help artificial intelligence evolve

#artificialintelligence

When Michael Bowling was growing up in Ohio, his parents were avid card players, dealing out hands of everything from euchre to gin rummy. Meanwhile, he and his friends would tear up board games lying around the family home and combine the pieces to make their own games, with new challenges and new markers for victory. Bowling has come far from his days of playing with colourful cards and plastic dice. He has three degrees in computing science and is now a professor at the University of Alberta. But, in his heart, Bowling still loves playing games.


A Popperian Falsification of Artificial Intelligence - Lighthill Defended

arXiv.org Artificial Intelligence

The area of computation called artificial intelligence (AI) is falsified by describing a previous 1972 falsification of AI by the British applied mathematician James Lighthill. It is explained how Lighthill's arguments continue to apply to current AI. It is argued that AI should use the Popperian scientific method, in which it is the duty of every scientist to attempt to falsify theories and, if theories are falsified, to replace or modify them. The paper describes the Popperian method in detail and discusses Paul Nurse's application of the method to cell biology, which also involves questions of mechanism and behavior. Arguments used by Lighthill in his original 1972 report that falsified AI are discussed. The Lighthill arguments are then shown to apply to current AI. The argument uses recent scholarship to explain Lighthill's assumptions and to show how the arguments based on those assumptions continue to falsify modern AI. An important focus of the argument involves Hilbert's philosophical programme, which defined knowledge and truth as provable formal sentences. Current AI takes the Hilbert programme as dogma beyond criticism, while Lighthill, as a mid-20th-century applied mathematician, had abandoned it. The paper uses recent scholarship to explain John von Neumann's criticism of AI, which I claim was assumed by Lighthill. The paper discusses computer chess programs to show that Lighthill's combinatorial explosion still applies to AI but not to humans. An argument showing that Turing Machines (TM) are not the correct description of computation is given. The paper concludes by advocating studying computation as Peter Naur's datalogy.
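As a rough, back-of-the-envelope illustration of that combinatorial explosion, the sketch below computes the usual textbook estimate of the chess game tree; the branching factor and game length are standard approximations, not figures taken from the paper.

```python
import math

# A game tree grows as b**d for branching factor b and depth d (in plies).
# Commonly cited chess estimates: about 35 legal moves per position and
# games of roughly 80 plies. Illustrative only, not values from the paper.
b, d = 35, 80
print(f"chess game tree ~ 35**80 ~ 10**{round(d * math.log10(b))} paths")
# About 10**124 paths -- no exhaustive search can enumerate that, which is the
# sense in which Lighthill argued that generality by brute force does not scale.
```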