Commonsense Reasoning

Can AI Achieve Common Sense to Make Machines More Intelligent?


Today, machines with artificial intelligence (AI) are becoming more prevalent in society. Across many fields, AI has taken over numerous tasks that humans used to do. Because human intelligence is the reference point, artificial intelligence is being shaped to replicate what humans can do. However, the technology has not yet matched the depth of wisdom humans possess, and it does not seem likely to reach that milestone any time soon. To replace human beings at most jobs, machines need to exhibit what we intuitively call "common sense".

Connective Cognition Network for Directional Visual Commonsense Reasoning

Neural Information Processing Systems

Visual commonsense reasoning (VCR) has been introduced to advance research on cognition-level visual understanding, i.e., a thorough understanding of the correlated details of a scene combined with inference over related commonsense knowledge. Recent studies in neuroscience suggest that brain function, or cognition, can be described as a global and dynamic integration of local neuronal connectivity that is context-sensitive to specific cognitive tasks. Inspired by this idea, we propose a connective cognition network (CCN) for VCR that dynamically reorganizes visual neuron connectivity, contextualized by the meaning of questions and answers. Concretely, we first develop visual neuron connectivity to fully model correlations of visual content. Then, a contextualization process is introduced to fuse the sentence representation with that of the visual neurons.
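The contextualization step described in the abstract can be pictured as mixing a sentence representation into each visual-neuron feature vector. The sketch below is illustrative, not the authors' code: the scalar gate, the fixed weight `w_gate`, and the toy vectors are all our assumptions; in the real CCN the fusion parameters would be learned.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def contextualize(visual_neurons, sentence, w_gate=0.5, b_gate=0.0):
    """Fuse `sentence` (a feature vector) into each visual-neuron
    vector via a scalar gate computed from their dot product."""
    fused = []
    for v in visual_neurons:
        score = sum(v_i * s for v_i, s in zip(v, sentence))
        g = sigmoid(w_gate * score + b_gate)  # how much context to mix in
        fused.append([g * s + (1 - g) * v_i for v_i, s in zip(v, sentence)])
    return fused

neurons = [[1.0, 0.0], [0.0, 1.0]]   # toy visual-neuron features
sent = [0.5, 0.5]                    # toy sentence representation
out = contextualize(neurons, sent)
```

A neuron whose features already align with the sentence gets a larger gate value and is pulled further toward the sentence representation, which is one simple way to make connectivity "contextualized by the meaning of questions and answers".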

Heterogeneous Graph Learning for Visual Commonsense Reasoning

Neural Information Processing Systems

The visual commonsense reasoning task aims to lead the research field toward cognition-level reasoning, with the ability to predict correct answers while providing convincing reasoning paths, resulting in three sub-tasks, i.e., Q→A, QA→R, and Q→AR. It poses great challenges in properly aligning semantics between the vision and language domains and in knowledge reasoning to generate persuasive reasoning paths. Existing works either resort to a powerful end-to-end network that cannot produce interpretable reasoning paths, or solely explore the intra-relationships of visual objects (a homogeneous graph) while ignoring cross-domain semantic alignment between visual concepts and linguistic words. In this paper, we propose a new Heterogeneous Graph Learning (HGL) framework for seamlessly integrating intra-graph and inter-graph reasoning in order to bridge the vision and language domains. Our HGL consists of a primal vision-to-answer heterogeneous graph (VAHG) module and a dual question-to-answer heterogeneous graph (QAHG) module, which interactively refine reasoning paths for semantic agreement.
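The inter-graph reasoning the abstract describes, in the spirit of the VAHG module, amounts to letting answer-word nodes exchange information with visual-object nodes. The sketch below is a generic bipartite dot-product attention step, not the paper's implementation; all vectors are toy values we made up for illustration.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(word_nodes, visual_nodes):
    """For each word node, compute attention over visual nodes
    (dot-product scores) and return the attended visual context."""
    contexts = []
    dim = len(visual_nodes[0])
    for w in word_nodes:
        scores = [sum(w_i * v_i for w_i, v_i in zip(w, v)) for v in visual_nodes]
        alphas = softmax(scores)
        ctx = [sum(a * v[d] for a, v in zip(alphas, visual_nodes))
               for d in range(dim)]
        contexts.append(ctx)
    return contexts

words = [[1.0, 0.0]]                    # one toy answer-word node
visuals = [[1.0, 0.0], [0.0, 1.0]]      # two toy visual-object nodes
ctx = attend(words, visuals)
```

Stacking such cross-domain steps is one way a heterogeneous graph can refine a reasoning path: each word ends up carrying a weighted summary of the visual objects most relevant to it.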

Commonsense Reasoning


Nuance is no longer sponsoring the competition, and the $25,000 prize mentioned below is no longer offered. The challenge lives on in the many research groups, at Microsoft Research, Facebook, and the Allen Institute, among other places, that are currently (as of 2019) working on aspects of the problem. Commonsense Reasoning is keen to promote the Winograd Schema Challenge and Nuance Communications' competition to pass an alternative to the Turing Test. Background: the Turing Test is intended to serve as a test of whether a machine has achieved human-level intelligence. In one of its best-known versions, a person attempts to determine whether he or she is conversing (via text) with a human or a machine.

Using ConceptNet to Teach Common Sense to an Automated Theorem Prover Artificial Intelligence

In recent years, numerous benchmarks for commonsense reasoning have been presented which cover different areas: the Choice of Plausible Alternatives Challenge (COPA) [17] requires causal reasoning in everyday situations, the Winograd Schema Challenge [8] addresses difficult cases of pronoun disambiguation, the TriangleCOPA Challenge [9] focuses on human relationships and emotions, and the Story Cloze Test with the ROCStories Corpora [11] focuses on the ability to determine a plausible ending for a given short story, to name just a few. In our system, we focus on the COPA challenge, where each problem consists of a problem description (the premise), a question, and two answer candidates (called alternatives). See Figure 1 for an example. Most approaches tackling these problems are based on machine learning or exploit statistical properties of the natural language input (see e.g.

Evaluating Commonsense in Pre-trained Language Models Artificial Intelligence

Contextualized representations trained over large raw text data have given remarkable improvements on NLP tasks including question answering and reading comprehension. Prior work has shown that syntactic, semantic, and word-sense knowledge is contained in such representations, which explains why they benefit these tasks. However, relatively little work has investigated the commonsense knowledge contained in contextualized representations, which is crucial for human question answering and reading comprehension. We study the commonsense ability of GPT, BERT, XLNet, and RoBERTa by testing them on seven challenging benchmarks, finding that language modeling and its variants are effective objectives for promoting models' commonsense ability, while bidirectional context and a larger training set are bonuses. We additionally find that current models do poorly on tasks that require more inference steps. Finally, we test the robustness of the models by constructing dual test cases, which are correlated such that a correct prediction on one sample should entail a correct prediction on the other. Interestingly, the models show confusion on these test cases, which suggests that they learn commonsense at the surface rather than at a deep level. We publicly release a test set, named CATs, for future research. Introduction: Contextualized representations trained over large-scale text data have given remarkable improvements to a wide range of NLP tasks, including natural language inference (Bowman et al. 2015), question answering (Rajpurkar, Jia, and Liang 2018), and reading comprehension (Lai et al. 2017). Given state-of-the-art results that approach or surpass human performance on several benchmark datasets, it is an interesting question what types of knowledge are learned in pre-trained contextualized representations, in order to better understand how they benefit the NLP problems above. There has been work investigating the nature of syntactic (Liu et al. 2019a), semantic (Liu et al. 2019a), and word-sense (Kim et al. 2019) knowledge contained in such contextualized representations, in particular BERT (Devlin et al. 2019).
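The dual-test-case idea above can be sketched as a paired evaluation: if a model truly grasps the underlying commonsense fact, it should answer both samples in a pair the same way. The data below is toy data, not the released CATs set, and "consistency" here is one plausible formalization (the fraction of pairs whose two samples receive the same correctness), which is our assumption rather than the paper's exact metric.

```python
def evaluate_dual_pairs(predictions, labels):
    """predictions, labels: lists of (a, b) tuples, one per dual pair.
    Returns (accuracy, consistency): per-sample accuracy and the
    fraction of pairs answered either both right or both wrong."""
    n_correct = 0
    n_consistent = 0
    for (pa, pb), (la, lb) in zip(predictions, labels):
        ca, cb = pa == la, pb == lb
        n_correct += ca + cb            # bools count as 0/1
        n_consistent += (ca == cb)      # same correctness within the pair
    n_pairs = len(predictions)
    return n_correct / (2 * n_pairs), n_consistent / n_pairs

preds = [(1, 1), (1, 0), (0, 0)]   # toy model predictions per pair
golds = [(1, 1), (1, 1), (0, 1)]   # toy gold labels per pair
acc, cons = evaluate_dual_pairs(preds, golds)
```

A model can score a respectable per-sample accuracy while its pair consistency stays low, which is exactly the symptom the paper interprets as surface-level rather than deep commonsense.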

Top k Memory Candidates in Memory Networks for Common Sense Reasoning Artificial Intelligence

Successful completion of a reasoning task requires the agent to have relevant prior knowledge or some given context of the world dynamics. Usually, the information provided to the system for a reasoning task is just the query or some supporting story, which is often not enough for common reasoning tasks. The goal here is that, if the information provided along with the question is not sufficient to answer it correctly, the model should choose the k most relevant documents to aid its inference process. In this work, the model dynamically selects the top k most relevant memory candidates that can be used to successfully solve reasoning tasks. Experiments were conducted on a subset of Winograd Schema Challenge (WSC) problems to show that the proposed model has potential for commonsense reasoning. The WSC is a test of machine intelligence, designed as an improvement on the Turing test.
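The top-k selection described above can be sketched as scoring every memory candidate against the query and keeping the k highest. This is a minimal stand-in, not the paper's model: the cosine-similarity scoring and the toy embeddings are our assumptions; in the actual memory network, the query and memories would be learned representations and the selection would feed the downstream reasoning module.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k_memories(query, memories, k):
    """Return indices of the k memories most similar to the query."""
    ranked = sorted(range(len(memories)),
                    key=lambda i: cosine(query, memories[i]),
                    reverse=True)
    return ranked[:k]

query = [1.0, 0.0]                                  # toy query embedding
memories = [[0.0, 1.0], [1.0, 0.1], [0.9, 0.9]]     # toy memory embeddings
picked = top_k_memories(query, memories, k=2)
```

The point of making k dynamic is that the model only widens its search over memory when the question plus its supporting story is insufficient on its own.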

CommonGen: A Constrained Text Generation Dataset Towards Generative Commonsense Reasoning Artificial Intelligence

Rational humans can generate sentences that cover a certain set of concepts while describing natural and common scenes. For example, given {apple (noun), tree (noun), pick (verb)}, humans can easily come up with scenes like "a boy is picking an apple from a tree" via their generative commonsense reasoning ability. However, we find that this capacity has not been well learned by machines. Most prior work in machine commonsense focuses on discriminative reasoning tasks in a multiple-choice question answering setting. Here, we present CommonGen: a challenging dataset for testing generative commonsense reasoning via a constrained text generation task. We collect 37k concept-sets as inputs and 90k human-written sentences as associated outputs. We also provide high-quality human-annotated rationales behind the reasoning process for the development and test sets. We demonstrate the difficulty of the task by examining a wide range of sequence generation methods with both automatic metrics and human evaluation. The state-of-the-art pre-trained generation model, UniLM, is still far from human performance on this task. Our data and code are publicly available at .
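The constraint in this task, that a generated sentence must cover all the input concepts, suggests a simple coverage check like the sketch below. This metric is our illustration, not CommonGen's official evaluation, and the prefix matching is a crude stand-in for real lemmatization.

```python
def concept_coverage(concepts, sentence):
    """Fraction of input concepts that appear in the generated sentence.
    Matching is by a 4-character stem prefix, so that e.g. the concept
    "pick" matches the surface form "picking"."""
    tokens = sentence.lower().split()
    covered = 0
    for c in concepts:
        stem = c.lower()[:4]
        if any(t.startswith(stem) for t in tokens):
            covered += 1
    return covered / len(concepts)

cov = concept_coverage(["apple", "tree", "pick"],
                       "a boy is picking an apple from a tree")
```

Coverage alone is easy to satisfy with an unnatural word salad, which is why the dataset pairs automatic metrics with human evaluation of how natural and commonsensical the scene is.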

Holding Algorithms Accountable


Artificial intelligence programs are extremely good at finding subtle patterns in enormous amounts of data, but don't understand the meaning of anything. Whether you are searching the Internet on Google, browsing your news feed on Facebook, or finding the quickest route on a traffic app like Waze, an algorithm is at the root of it. Algorithms have permeated our daily lives; they help to simplify, distill, process, and provide insights from massive amounts of data. According to Ernest Davis, a professor of computer science at New York University's Courant Institute of Mathematical Sciences whose research centers on the automation of common-sense reasoning, the technologies that currently exist for artificial intelligence (AI) programs are extremely good at finding subtle patterns in enormous amounts of data. "One way or another," he says, "that is how they work."