Commonsense Reasoning


McCarthy as Scientist and Engineer, with Personal Recollections

AI Magazine

McCarthy, a past president of AAAI and an AAAI Fellow, helped design the foundation of today's internet-based computing and is widely credited with coining the term artificial intelligence. This remembrance by Edward Feigenbaum, also a past president of AAAI and a professor emeritus of computer science at Stanford University, was delivered at the celebration of John McCarthy's accomplishments, held at Stanford on 25 March 2012. In the field's early days, everyone knew everyone else, and saw them at the few conference panels that were held. At one of those conferences, I met John. We renewed contact upon his re-arrival at Stanford, and that was to have major consequences for my professional life.


Planning, Executing, and Evaluating the Winograd Schema Challenge

AI Magazine

The Winograd Schema Challenge (WSC) was proposed by Hector Levesque in 2011 as an alternative to the Turing test. Chief among its features is a simple question format that can span many commonsense knowledge domains. Questions are chosen so that they do not require specialized knowledge or training and are easy for humans to answer. This article details our plans to run the WSC and evaluate results. Turing (1950) had first introduced the notion of testing a computer system's intelligence by assessing whether it could fool a human judge into thinking that it was conversing with a human rather than a computer.
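To make the question format concrete, below is the classic councilmen/demonstrators schema from Levesque's proposal, written out as a small Python structure with a toy scoring check. The field names and the scoring function are illustrative choices here, not the official WSC data format.

```python
# Illustrative Winograd schema: swapping one "special" word flips which
# candidate the pronoun refers to, so shallow statistics are of little help.
schema = {
    "sentence": ("The city councilmen refused the demonstrators a permit "
                 "because they {feared|advocated} violence."),
    "question": "Who {feared|advocated} violence?",
    "candidates": ["the city councilmen", "the demonstrators"],
    "answers": {  # correct referent for each variant of the special word
        "feared": "the city councilmen",
        "advocated": "the demonstrators",
    },
}

def score(predict, schema):
    """Credit a system only if it resolves both variants correctly."""
    return all(predict(word, schema) == gold
               for word, gold in schema["answers"].items())

# A baseline that always picks the first candidate gets no credit:
print(score(lambda word, s: s["candidates"][0], schema))  # False
```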


CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks

AI Magazine

The recent history of expert systems, for example, highlights how constricting the brittleness and knowledge acquisition bottlenecks are. Moreover, standard software methodology (e.g., working from a detailed "spec") has proven of little use in AI, a field which by definition tackles ill-structured problems. How can these bottlenecks be widened? Attractive, elegant answers have included machine learning, automatic programming, and natural language understanding. But decades of work on such systems (Green et al., 1974; Lenat et al., 1983; Lenat & Brown, 1984; Schank & Abelson, 1977) have convinced us that each of these approaches has difficulty "scaling up" for want of a substantial base of real world knowledge.


Editorial Introduction to the Special Articles in the Spring Issue

AI Magazine

The articles in this special issue of AI Magazine include those that propose specific tests and those that look at the challenges inherent in building robust, valid, and reliable tests for advancing the state of the art in AI. To people outside the field, the test -- which hinges on the ability of machines to fool people into thinking that they (the machines) are people -- is practically synonymous with the quest to create machine intelligence. Within the field, the test is widely recognized as a pioneering landmark, but also is now seen as a distraction, designed over half a century ago, and too crude to really measure intelligence. Intelligence is, after all, a multidimensional variable, and no single test could ever definitively measure it. Moreover, the original test, at least in its standard implementations, has turned out to be highly gameable, arguably an exercise in deception rather than a true measure of anything especially correlated with intelligence.



[P] Interactive demo of a neural coreference resolution SOTA model open-source code • r/MachineLearning

@machinelearnbot

Coreference resolution is a very challenging NLP task in which you try to link the mentions in a text that refer to the same real-world entity. It is the basis of the Winograd Schema Challenge, a test designed to defeat the AIs that have beaten the Turing Test! Hope you like it, I definitely think there should be more interactive demos of NLP systems like this!
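For readers who want to try the model behind the demo rather than the web page, a rough sketch of the open-source route is below, assuming Hugging Face's neuralcoref spaCy extension (around v4.x, which targets spaCy 2.x); the extension attributes are taken from its documentation and should be treated as assumptions here.

```python
# Minimal sketch: run the open-source neural coreference model as a spaCy
# pipeline component (assumes `pip install neuralcoref` with spaCy 2.x and
# an English model such as en_core_web_sm).
import spacy
import neuralcoref

nlp = spacy.load("en_core_web_sm")
neuralcoref.add_to_pipe(nlp)      # adds the coreference component to the pipeline

doc = nlp("My sister has a dog. She loves him.")
print(doc._.has_coref)            # True if any coreference cluster was found
print(doc._.coref_clusters)       # clusters of mentions that corefer
print(doc._.coref_resolved)       # input text with pronouns replaced by their antecedents
```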


Amazon is on the hunt for mad scientists

#artificialintelligence

It's also home to Alexa, the voice assistant which powers the $179 Echo and Echo Dot gadgets. Amazon's machine learning boss (and founder of Amazon Research Cambridge) Professor Neil Lawrence yesterday discussed the ethics of using our voices to train computers. But when quizzed by the Sun on whether new starters would be offered specific ethics training, Lawrence said that those in control of Amazon's machines were only trained in "information security." "The problems we solve in the Alexa Knowledge team in Cambridge help Alexa get smarter by understanding the different ways people talk, by learning more and more facts about the world, by improving her common sense reasoning and by responding in the most natural way possible in multiple languages."


IBM: Response to RFI

#artificialintelligence

In order for AI systems to enhance humans' quality of life, both personally and professionally, they must acquire broad and deep knowledge from multiple domains, learn continuously from interactions with people and environments, and support reasoned decisions. In particular, unsupervised learning capabilities are needed to provide AI systems with common sense reasoning, methods should be developed to avoid bias and specificity in data sets, and AI algorithms should be transparent and interpretable and able to interact with humans in natural ways. The AI field's long-term progress depends upon many advances, including the following: Machine learning and reasoning: Most current AI systems use supervised learning, requiring massive amounts of labeled data for training.


Google's chatbot discusses the meaning of life

Daily Mail

Google researchers have developed a chatbot that can carry out a natural conversation with a human, even demonstrating common sense reasoning. When presented with 'issues accessing VPN,' for example, the machine asked questions about the operating systems in question and the error message to eventually come to the right answer. 'We find that this straightforward model can generate simple conversations given a large conversational training dataset.'
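The quoted line comes from the underlying research paper (Vinyals and Le, "A Neural Conversational Model", 2015), whose recipe is a plain sequence-to-sequence model: one recurrent network encodes the user's utterance, another decodes the reply token by token. The sketch below shows that general technique on a toy vocabulary in PyTorch; it is an illustration, not Google's actual system or data.

```python
# Toy sequence-to-sequence "chatbot": an encoder LSTM reads the prompt, a
# decoder LSTM generates the reply, trained here to overfit a single pair.
import torch
import torch.nn as nn

vocab = ["<pad>", "<sos>", "<eos>", "hi", "how", "are", "you", "fine", "thanks"]
stoi = {w: i for i, w in enumerate(vocab)}

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.LSTM(emb, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))           # summarize the prompt
        dec_out, _ = self.decoder(self.embed(tgt), state)  # condition the reply on it
        return self.out(dec_out)                           # per-token vocabulary logits

def ids(words):
    return torch.tensor([[stoi[w] for w in words]])

model = Seq2Seq(len(vocab))
src = ids(["hi", "how", "are", "you"])
tgt_in = ids(["<sos>", "fine", "thanks"])   # decoder input, shifted right
tgt_out = ids(["fine", "thanks", "<eos>"])  # tokens the decoder must predict

loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    logits = model(src, tgt_in)
    loss = loss_fn(logits.view(-1, len(vocab)), tgt_out.view(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

In the paper, the same architecture was reportedly trained on an IT helpdesk troubleshooting log and on movie subtitles, which is where exchanges like the VPN example above come from.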


Logic and Artificial Intelligence (Stanford Encyclopedia of Philosophy)

AITopics Original Links

Artificial Intelligence (which I'll refer to hereafter by its nickname, "AI") is the subfield of Computer Science devoted to developing programs that enable computers to display behavior that can (broadly) be characterized as intelligent.[1] Most research in AI is devoted to fairly narrow applications, such as planning or speech-to-speech translation in limited, well-defined task domains. But substantial interest remains in the long-range goal of building generally intelligent, autonomous agents,[2] even if the goal of fully human-like intelligence remains elusive and is seldom pursued explicitly as such. Throughout its relatively short history, AI has been heavily influenced by logical ideas. AI has drawn on many research methodologies: the value and relative importance of logical formalisms is questioned by some leading practitioners, and has been debated in the literature from time to time.[3]