Commonsense Causal Reasoning between Short Texts

AAAI Conferences

Commonsense causal reasoning is the process of capturing and understanding the causal dependencies among events and actions. Such events and actions can be expressed as terms, phrases, or sentences in natural language text. Therefore, one possible way of obtaining causal knowledge is to extract causal relations between terms or phrases from a large text corpus. However, causal relations in text are sparse, ambiguous, and sometimes implicit, and thus difficult to obtain. This paper attacks the problem of commonsense causal reasoning between short texts (phrases and sentences) using a data-driven approach. We propose a framework that automatically harvests a network of cause-effect terms from a large web corpus. Backed by this network, we propose a novel and effective metric to properly model the causality strength between terms. We show that these signals can be aggregated for causal reasoning between short texts, including sentences and phrases. In particular, our approach outperforms all previously reported results on the standard SemEval COPA task by substantial margins.
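The abstract does not specify the causality-strength metric itself; the sketch below shows one common data-driven pattern it alludes to, a PMI-style cause-effect score over harvested term pairs, aggregated across two short texts. The counts, term names, and scoring function here are illustrative assumptions, not the paper's actual metric or data.

```python
import math
from itertools import product

# Hypothetical toy counts "harvested" from a corpus: how often term a
# appears as a cause of term b, plus marginal counts (invented values).
cause_effect_counts = {("storm", "flood"): 40, ("rain", "flood"): 55,
                       ("rain", "wet"): 70, ("sun", "wet"): 2}
cause_counts = {"storm": 100, "rain": 200, "sun": 150}
effect_counts = {"flood": 120, "wet": 90}
total = 500  # total harvested cause-effect pair occurrences

def causal_pmi(cause, effect):
    """PMI-style causality strength between two terms (a sketch)."""
    joint = cause_effect_counts.get((cause, effect), 0)
    if joint == 0:
        return 0.0
    p_joint = joint / total
    p_cause = cause_counts[cause] / total
    p_effect = effect_counts[effect] / total
    return math.log(p_joint / (p_cause * p_effect))

def text_causality(cause_terms, effect_terms):
    """Aggregate term-level signals into a score between two short texts."""
    scores = [causal_pmi(c, e) for c, e in product(cause_terms, effect_terms)]
    return sum(scores) / len(scores)

# COPA-style choice: which candidate cause better explains "flood"?
print(text_causality(["rain"], ["flood"]) > text_causality(["sun"], ["flood"]))
```

A COPA item is then answered by scoring the premise against each alternative and picking the higher-scoring one.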

Pearl's Causality in a Logical Setting

AAAI Conferences

We provide a logical representation of Pearl's structural causal models in the causal calculus of McCain and Turner (1997) and its first-order generalization by Lifschitz. It will be shown that, under this representation, the nonmonotonic semantics of the causal calculus describes precisely the solutions of the structural equations (the causal worlds of the causal model), while the causal logic from Bochman (2004) is adequate for describing the behavior of causal models under interventions (forming submodels).
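The two semantic notions the abstract contrasts, the causal worlds that solve the structural equations and the submodels formed by interventions, can be illustrated with a minimal executable sketch. The toy variables and equations below are invented for the example; they stand in for a generic acyclic structural causal model.

```python
# Minimal sketch of a Pearl-style structural causal model:
# a causal world is the solution of the structural equations,
# and an intervention forms a submodel by replacing one equation.

def solve(equations, exogenous):
    """Compute the causal world: evaluate each structural equation in
    order, given exogenous inputs (assumes an acyclic, ordered model)."""
    world = dict(exogenous)
    for var, fn in equations.items():
        world[var] = fn(world)
    return world

def intervene(equations, var, value):
    """Form the submodel do(var := value): replace var's equation
    with the constant value, leaving all other equations intact."""
    sub = dict(equations)
    sub[var] = lambda w: value
    return sub

# Toy model: the sprinkler runs unless it rains; rain or the
# sprinkler makes the grass wet. (Illustrative, not from the paper.)
eqs = {
    "sprinkler": lambda w: not w["rain"],
    "wet": lambda w: w["rain"] or w["sprinkler"],
}
print(solve(eqs, {"rain": False}))
print(solve(intervene(eqs, "sprinkler", False), {"rain": False}))
```

The first call prints the natural causal world; the second prints the world of the submodel under the intervention, which is exactly the behavior the nonmonotonic semantics is shown to capture.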

On Logics and Semantics of Indeterminate Causation

AAAI Conferences

We will explore the use of disjunctive causal rules for representing indeterminate causation. We first provide a logical formalization of such rules in the form of a disjunctive inference relation and describe its logical semantics. Then we consider a nonmonotonic semantics for such rules, described in (Turner 1999). It will be shown, however, that under this semantics disjunctive causal rules admit a stronger logic in which these rules are reducible to ordinary, singular causal rules. This semantics also tends to give an exclusive interpretation of disjunctive causal effects, and so excludes some reasonable models in particular cases. To overcome these shortcomings, we will introduce an alternative nonmonotonic semantics for disjunctive causal rules, called a covering semantics, that permits an inclusive interpretation of indeterminate causal information. Still, it will be shown that even in this case there exists a systematic procedure, which we call normalization, that allows us to capture precisely the covering semantics using only singular causal rules. This normalization procedure can be viewed as a kind of nonmonotonic completion, and it generalizes established ways of representing indeterminate effects in current theories of action.

The Noisy-Logical Distribution and its Application to Causal Inference

Neural Information Processing Systems

We describe a novel noisy-logical distribution for representing the distribution of a binary output variable conditioned on multiple binary input variables. The distribution is represented in terms of noisy-ors and noisy-and-nots of causal features, which are conjunctions of the binary inputs. The standard noisy-or and noisy-and-not models, used in causal reasoning and artificial intelligence, are special cases of the noisy-logical distribution. We prove that the noisy-logical distribution is complete in the sense that it can represent all conditional distributions provided a sufficient number of causal features are used. We illustrate the noisy-logical distribution by showing that it can account for new experimental findings on how humans perform causal reasoning in more complex contexts.


AAAI Conferences

We present a new approach to token-level causal reasoning that we call Sequences Of Mechanisms (SoMs), which models causality as a dynamic sequence of active mechanisms that chain together to propagate causal influence through time. We motivate this approach with examples from AI and robotics and show why existing approaches are inadequate. We present an algorithm for causal reasoning based on SoMs, which takes as input a knowledge base of first-order mechanisms and a set of observations, and hypothesizes which mechanisms are active at what time. We show empirically that our algorithm produces plausible causal explanations of simulated observations generated from a causal model. We argue that the SoMs approach is qualitatively closer to the human causal reasoning process; for example, it includes only relevant variables in explanations. We present new insights about causal reasoning that become apparent with this view. One such insight is that observation and manipulation do not commute in causal models, a fact which we show to be a generalization of the Equilibration-Manipulation Commutability of (Dash 2005).