Knowledge that Everyone Knows. "People do not walk on their heads." The assertion comes about 900 statements deep into the 527,308 items that comprise the Open Mind common sense database. It's after "Laws are the rules of society" and before "The sky is blue during the day." This collection of mundane facts, which would take more than 20,000 pages to print out, consists entirely of statements so unremarkable they are barely worth stating. Most of us would correctly dismiss them as common sense.
– from D.C. Denison, Guess who's smarter. Boston Globe Online (page hosted at MIT), May 26, 2003.
McCarthy, a past president of AAAI and an AAAI Fellow, helped design the foundation of today's internet-based computing and is widely credited with coining the term artificial intelligence. This remembrance by Edward Feigenbaum, also a past president of AAAI and a professor emeritus of computer science at Stanford University, was delivered at the celebration of John McCarthy's accomplishments, held at Stanford on 25 March 2012. Everyone knew everyone else, and saw them at the few conference panels that were held. At one of those conferences, I met John. We renewed contact upon his return to Stanford, and that was to have major consequences for my professional life.
The Winograd Schema Challenge (WSC) was proposed by Hector Levesque in 2011 as an alternative to the Turing test. Chief among its features is a simple question format that can span many commonsense knowledge domains. Questions are chosen so that they do not require specialized knowledge or training and are easy for humans to answer. This article details our plans to run the WSC and evaluate results. Turing (1950) had first introduced the notion of testing a computer system's intelligence by assessing whether it could fool a human judge into thinking that it was conversing with a human rather than a computer.
The recent history of expert systems, for example, highlights how constricting the brittleness and knowledge acquisition bottlenecks are. Moreover, standard software methodology (e.g., working from a detailed "spec") has proven of little use in AI, a field which by definition tackles ill-structured problems. How can these bottlenecks be widened? Attractive, elegant answers have included machine learning, automatic programming, and natural language understanding. But decades of work on such systems (Green et al., 1974; Lenat et al., 1983; Lenat & Brown, 1984; Schank & Abelson, 1977) have convinced us that each of these approaches has difficulty "scaling up" for want of a substantial base of real world knowledge.
The articles in this special issue of AI Magazine include those that propose specific tests and those that look at the challenges inherent in building robust, valid, and reliable tests for advancing the state of the art in AI. To people outside the field, the test -- which hinges on the ability of machines to fool people into thinking that they (the machines) are people -- is practically synonymous with the quest to create machine intelligence. Within the field, the test is widely recognized as a pioneering landmark, but it is also now seen as a distraction: designed over half a century ago, and too crude to really measure intelligence. Intelligence is, after all, a multidimensional variable, and no single test could ever definitively measure it. Moreover, the original test, at least in its standard implementations, has turned out to be highly gameable, arguably an exercise in deception rather than a true measure of anything especially correlated with intelligence.
Osborn, Joseph C. (University of California, Santa Cruz) | Samuel, Ben (University of New Orleans) | Summerville, Adam (University of California, Santa Cruz) | Mateas, Michael (University of California, Santa Cruz)
General videogame playing has come a long way in a short period of time, but remains at the level of solving relatively short games made up of distinct and isolated episodes. Even simple console role-playing games (RPGs) are far beyond the reach of current techniques, requiring the synthesis of cultural knowledge with compositional reasoning over several interconnected sub-games. We explore how the challenges of playing these games could spark new advances in compositional analysis of games and common-sense reasoning. General RPG playing can leverage advances in episodic general game playing and in areas like text understanding, image classification, and automated game design learning. It has direct applications in design support and AI-based game design, and the techniques used to enable it could generalize to other families of games such as adventure, open-world, and simulation games. In this paper, we describe the motivation behind general RPG playing in a sub-domain of Nintendo Entertainment System (NES) RPGs, some promising approaches to some of its fundamental issues, and immediate next steps; we conclude by describing a few concrete benchmark problems on the path towards automated play of these complex games.
It's also home to Alexa, the voice assistant that powers the $179 Echo and Echo Dot gadgets. Amazon's machine learning boss (and founder of Amazon Research Cambridge) Professor Neil Lawrence yesterday discussed the ethics of using our voices to train computers. But when quizzed by the Sun on whether new starters would be offered specific ethics training, Lawrence said that those in control of Amazon's machines were only trained in "information security." "The problems we solve in the Alexa Knowledge team in Cambridge help Alexa get smarter by understanding the different ways people talk, by learning more and more facts about the world, by improving her common sense reasoning and by responding in the most natural way possible in multiple languages."