Luo, Zhiyi (Shanghai Jiao Tong University) | Sha, Yuchen (Shanghai Jiao Tong University) | Zhu, Kenny Q. (Shanghai Jiao Tong University) | Hwang, Seung-Won (Yonsei University) | Wang, Zhongyuan (Microsoft Research Asia)
Commonsense causal reasoning is the process of capturing and understanding the causal dependencies among events and actions. Such events and actions can be expressed as terms, phrases, or sentences in natural language text. One possible way of obtaining causal knowledge is therefore to extract causal relations between terms or phrases from a large text corpus. However, causal relations in text are sparse, ambiguous, and sometimes implicit, and thus difficult to obtain. This paper attacks the problem of commonsense causality reasoning between short texts (phrases and sentences) using a data-driven approach. We propose a framework that automatically harvests a network of cause-effect terms from a large web corpus. Backed by this network, we propose a novel and effective metric to properly model the causality strength between terms. We show that these signals can be aggregated for causality reasoning between short texts, including sentences and phrases. In particular, our approach outperforms all previously reported results on the standard SemEval COPA task by substantial margins.
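A PMI-style score over harvested cause-effect co-occurrence counts can illustrate the idea of a causality-strength metric. This is a minimal sketch, not the paper's exact metric: the toy counts, the `alpha` exponent, and the geometric blend of necessity- and sufficiency-flavored components are all illustrative assumptions.

```python
from collections import Counter

# Toy cause-effect pair counts, standing in for a network
# harvested from a large web corpus (hypothetical data).
pair_counts = Counter({
    ("rain", "wet"): 30, ("rain", "flood"): 10,
    ("fire", "smoke"): 25, ("smoke", "alarm"): 15,
})
cause_counts, effect_counts = Counter(), Counter()
for (c, e), n in pair_counts.items():
    cause_counts[c] += n
    effect_counts[e] += n
total = sum(pair_counts.values())

def causal_strength(cause, effect, alpha=0.66):
    """PMI-style causal strength between two terms: a geometric blend of
    a necessity-flavored and a sufficiency-flavored component."""
    joint = pair_counts[(cause, effect)] / total
    if joint == 0:
        return 0.0
    p_c = cause_counts[cause] / total
    p_e = effect_counts[effect] / total
    cs_nec = joint / (p_c ** alpha * p_e)  # discounts very frequent causes
    cs_suf = joint / (p_c * p_e ** alpha)  # discounts very frequent effects
    return (cs_nec * cs_suf) ** 0.5
```

Term-level scores like this can then be aggregated (e.g., summed over word pairs) to compare candidate cause-effect readings of two short texts.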
Causal models defined in terms of structural equations have proved to be quite a powerful way of representing knowledge regarding causality. However, a number of authors have given examples that seem to show that the Halpern-Pearl (HP) definition of causality (Halpern & Pearl 2005) gives intuitively unreasonable answers. Here it is shown that, for each of these examples, we can give two stories consistent with the description in the example, such that intuitions regarding causality are quite different for each story.
Computer vision and artificial intelligence research has long danced around the subject of causality: vision researchers use causal relationships to aid action detection, and AI researchers propose methods for causal induction independent of video sensors. In this paper, we argue that learning perceptual causality from video is a necessary step for understanding scenes in video. We explain how current object and action detection is suffering without causality, and we explain how current causality research is suffering without grounding on raw sensors. We then go on to describe one plausible solution for grounding perceptual causality on raw sensors.
We propose a method for learning causal relations within high-dimensional tensor data as typically recorded in non-experimental databases. The method allows the simultaneous inclusion of numerous dimensions in the analysis, such as samples, time, and domain variables, construed as tensors. In such tensor data we exploit and integrate non-Gaussian models and tensor-analytic algorithms in a novel way. We prove that simple causal relations can be determined regardless of the dimensionality of the data. We rely on a statistical decomposition that flattens higher-dimensional data tensors into matrices. This decomposition preserves the causal information and is therefore suitable for structure learning of causal graphical models, where a causal relation can be generalised beyond a single dimension, for example, over all time points. Related methods either focus on a set of samples for instantaneous effects or look at one sample for effects at certain time points. We evaluate the resulting algorithm and discuss its performance with both synthetic and real-world data.
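The flattening step described above corresponds to the standard mode-n unfolding of a tensor. The sketch below shows that operation only; the dimensions and the downstream use (feeding the unfolded matrix to a non-Gaussian structure-learning method) are illustrative assumptions, not the paper's specific decomposition.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: flatten a higher-order tensor into a matrix
    whose rows index the chosen mode (samples, time, or variables)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Hypothetical data cube: 50 samples x 20 time points x 4 variables.
X = np.random.randn(50, 20, 4)
X_samples = unfold(X, 0)  # 50 x 80: one row per sample
X_vars = unfold(X, 2)     # 4 x 1000: one row per variable
# Either matrix could be handed to a non-Gaussian causal-discovery
# routine, with relations generalised across the flattened modes.
```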
Classically, causality requires that state A has independent information that influences state B. If this happens only in one direction, A is said to causally act on B. In nonlinear dynamical systems, however, interactions are mutual. Their parts cannot be separated in this simple way. A definition of causal efficacy that generalizes the classical unidirectional ("acyclic") notion of causality to the nonseparable bidirectional ("cyclic") case is missing. Harnack et al. propose a mathematically transparent definition of effective causal influences in cyclic dynamical systems. This relies on reconstructions of the system's overall state from measurements. Reconstructions are obtained in parallel from observations at different system components. Although generally the respective reconstructions are topologically equivalent, the mapping among the reconstructions exhibits distortions that reflect effective causal influences.
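The reconstructions mentioned above are typically delay-coordinate embeddings of single measured components. The sketch below builds two such reconstructions for a pair of unidirectionally coupled maps; the coupled logistic system and its parameters are illustrative assumptions, and the comparison of reconstructions (where Harnack et al.'s distortion analysis would apply) is only indicated in a comment.

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Takens delay-coordinate reconstruction of the system's overall
    state from a single measured component x(t)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Coupled logistic maps: x drives y (illustrative cyclic-free coupling).
N = 500
x, y = np.empty(N), np.empty(N)
x[0], y[0] = 0.4, 0.2
for t in range(N - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])

Mx = delay_embed(x)  # reconstruction from component x
My = delay_embed(y)  # reconstruction from component y
# The two reconstructions are topologically equivalent; distortions in
# the mapping between My and Mx reflect the effective influence of x on y.
```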