Commonsense Reasoning


Logical Semantics and Commonsense Knowledge: Where Did We Go Wrong, and How to Go Forward, Again

arXiv.org Artificial Intelligence

We argue that logical semantics might have faltered due to its failure to distinguish between two fundamentally different types of concepts: ontological concepts, which should be types in a strongly-typed ontology, and logical concepts, which are predicates corresponding to properties of, and relations between, objects of various ontological types. We will then show that accounting for these differences amounts to the integration of lexical and compositional semantics in one coherent framework, and to an embedding in our logical semantics of a strongly-typed ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. We will show that in such a framework a number of challenges in natural language semantics can be adequately and systematically treated.
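
To make the distinction concrete, here is a minimal sketch in Python, under the assumption that ontological concepts map to types and logical concepts to typed predicates; the names Entity, Human, Book, heavy, and wrote are illustrative and do not come from the paper, and this is a sketch of the general idea rather than the author's formalism.

```python
# Minimal sketch of the type/predicate split described above, assuming
# illustrative names (Entity, Human, Book, heavy, wrote) that are NOT
# taken from the paper: ontological concepts become types in a class
# hierarchy, while logical concepts become predicates over typed objects.
from dataclasses import dataclass


class Entity:
    """Root of the illustrative ontology."""


@dataclass
class Human(Entity):
    """Ontological concept: a type, not a predicate."""
    name: str


@dataclass
class Book(Entity):
    """Another ontological concept."""
    title: str


def heavy(x: Entity) -> bool:
    """Logical concept: a property (unary predicate) of typed objects."""
    raise NotImplementedError  # truth conditions supplied by the semantics


def wrote(a: Human, b: Book) -> bool:
    """Logical concept: a relation with typed argument positions."""
    raise NotImplementedError  # wrote(book, human) would be ill-typed


# On this view, "John is heavy" is a predication over an object of type
# Human, while "John is a human" is a typing judgement rather than the
# application of an ordinary predicate.
john = Human("John")
principia = Book("Principia Mathematica")
```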


Facebook's AI arm explains its investment in robotics ZDNet

#artificialintelligence

Facebook on Tuesday officially announced that it's hired some of academia's top AI researchers, defending its practice of drawing talent from universities around the globe. Facebook AI Research (FAIR) "relies on open partnerships to help drive AI forward, where researchers have the freedom to control their own agenda," Facebook Chief AI Scientist Yann LeCun wrote in a blog post. "Ours frequently collaborate with academics from other institutions, and we often provide financial and hardware resources to specific universities." The latest hires include Carnegie Mellon Prof. Jessica Hodgins, who will lead a new FAIR lab in Pittsburgh focused on robotics, large-scale and lifelong learning, common sense reasoning, and AI in support of creativity. She'll be joined by Carnegie Mellon Prof. Abhinav Gupta, another robotics expert.


A Simple Method for Commonsense Reasoning

arXiv.org Artificial Intelligence

Commonsense reasoning is a long-standing challenge for deep learning. For example, it is difficult to use neural networks to tackle the Winograd Schema dataset (Levesque 2011). In this paper, we present a simple method for commonsense reasoning with neural networks, using unsupervised learning. Key to our method is the use of language models, trained on a massive amount of unlabeled data, to score multiple-choice questions posed by commonsense reasoning tests. On both the Pronoun Disambiguation and Winograd Schema challenges, our models outperform previous state-of-the-art methods by a large margin, without using expensive annotated knowledge bases or hand-engineered features. We train an array of large RNN language models that operate at the word or character level on LM-1-Billion, CommonCrawl, SQuAD, Gutenberg Books, and a customized corpus for this task, and show that diversity of training data plays an important role in test performance. Further analysis also shows that our system successfully discovers important features of the context that decide the correct answer, indicating a good grasp of commonsense knowledge.
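
The scoring recipe can be illustrated with a short sketch. The version below is an assumption-laden stand-in: it uses an off-the-shelf GPT-2 model from the Hugging Face transformers library instead of the paper's RNN language models, and the substitution template and example sentence are invented for illustration; only the general idea, substituting each candidate for the pronoun and keeping the candidate whose resolved sentence the language model rates as more probable, follows the abstract.

```python
# Sketch of LM-based Winograd scoring, assuming the Hugging Face
# `transformers` library with GPT-2 as a stand-in for the paper's RNN LMs.
# Each candidate referent is substituted into the pronoun slot, and the
# candidate whose resolved sentence the LM rates as more probable wins.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()


def sentence_log_prob(sentence: str) -> float:
    """Approximate log-probability of a sentence under the language model
    (negative mean token loss scaled by sentence length)."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean next-token NLL
    return -loss.item() * ids.size(1)


def resolve(template: str, candidates: list[str]) -> str:
    """Return the candidate that yields the more probable sentence."""
    return max(candidates,
               key=lambda c: sentence_log_prob(template.replace("_", c)))


# Illustrative example (not taken from the paper's test sets):
template = "The trophy does not fit in the suitcase because the _ is too big."
print(resolve(template, ["trophy", "suitcase"]))  # expected: trophy
```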


How could businesses use AI in the future?

#artificialintelligence

"We know it's going to impact how we run our business and it's certainly going to impact our customers. Dr Adrian Weller, programme director of AI at the Alan Turing Institute, is also enthusiastic for what the future holds. "Looking ahead, there are many exciting technical challenges that could really help us take AI to the next level," he says. "A grand challenge will be how we can try to introduce common sense reasoning to enable a whole host of new applications." "I think we will be challenged in terms of the traditional ways of doing things, and these technologies will open up opportunities for us to experiment," he says.


McCarthy as Scientist and Engineer, with Personal Recollections

AI Magazine

McCarthy, a past president of AAAI and an AAAI Fellow, helped design the foundation of today's internet-based computing and is widely credited with coining the term artificial intelligence. This remembrance by Edward Feigenbaum, also a past president of AAAI and a professor emeritus of computer science at Stanford University, was delivered at the celebration of John McCarthy's accomplishments, held at Stanford on 25 March 2012. Everyone knew everyone else, and saw them at the few conference panels that were held. At one of those conferences, I met John. We renewed contact upon his rearrival at Stanford, and that was to have major consequences for my professional life.


Planning, Executing, and Evaluating the Winograd Schema Challenge

AI Magazine

The Winograd Schema Challenge (WSC) was proposed by Hector Levesque in 2011 as an alternative to the Turing test. Chief among its features is a simple question format that can span many commonsense knowledge domains. Questions are chosen so that they do not require specialized knowledge or training and are easy for humans to answer. This article details our plans to run the WSC and evaluate results. Turing (1950) had first introduced the notion of testing a computer system's intelligence by assessing whether it could fool a human judge into thinking that it was conversing with a human rather than a computer.
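
For a concrete sense of that question format, the snippet below encodes Levesque's classic schema as a small data structure; the sentence and answers are the well-known city councilmen example, but the field names and this particular encoding are illustrative assumptions rather than the challenge organizers' official format.

```python
# Levesque's classic Winograd schema, encoded as a small dictionary.
# The sentence and answers are the well-known example; the field names and
# this encoding are illustrative, not the challenge's official format.
schema = {
    "sentence": ("The city councilmen refused the demonstrators a permit "
                 "because they {special} violence."),
    "question": "Who {special} violence?",
    "candidates": ["the city councilmen", "the demonstrators"],
    # Swapping the special word flips the correct referent, which is what
    # makes the schema hard to solve with shallow statistical cues alone.
    "answers": {"feared": "the city councilmen",
                "advocated": "the demonstrators"},
}

for word, referent in schema["answers"].items():
    print(schema["sentence"].format(special=word), "->", referent)
```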


CYC: Using Common Sense Knowledge to Overcome Brittleness and Knowledge Acquisition Bottlenecks

AI Magazine

The recent history of expert systems, for example, highlights how constricting the brittleness and knowledge acquisition bottlenecks are. Moreover, standard software methodology (e.g., working from a detailed "spec") has proven of little use in AI, a field which by definition tackles ill-structured problems. How can these bottlenecks be widened? Attractive, elegant answers have included machine learning, automatic programming, and natural language understanding. But decades of work on such systems (Green et al., 1974; Lenat et al., 1983; Lenat & Brown, 1984; Schank & Abelson, 1977) have convinced us that each of these approaches has difficulty "scaling up" for want of a substantial base of real world knowledge.