A Primer on Causal Analysis

arXiv.org Machine Learning

We provide a conceptual map for navigating causal analysis problems. Focusing on discrete random variables, we consider causal effect estimation from observational data. The presented approaches also apply to continuous variables, although estimation becomes more complex in that setting. We then introduce the four schools of thought for causal analysis
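
As a minimal, hypothetical illustration of one standard estimator for this setting, the sketch below performs back-door adjustment for a discrete treatment, outcome, and observed confounder; the variable names (T, Y, Z) and the toy data are assumptions for illustration and are not taken from the primer.

```python
# Illustrative sketch (not from the paper): back-door adjustment,
#   P(Y=y | do(T=t)) = sum_z P(Y=y | T=t, Z=z) * P(Z=z),
# estimated from observational (t, y, z) samples of discrete variables.
from collections import Counter

def backdoor_effect(samples, t_value, y_value):
    """Estimate P(Y=y_value | do(T=t_value)) from (t, y, z) tuples."""
    n = len(samples)
    z_counts = Counter(z for _, _, z in samples)              # marginal of Z
    effect = 0.0
    for z, count_z in z_counts.items():
        stratum = [y for t, y, z_ in samples if z_ == z and t == t_value]
        if not stratum:
            continue                                          # no support for T=t in this stratum
        p_y_given_tz = sum(1 for y in stratum if y == y_value) / len(stratum)
        effect += p_y_given_tz * (count_z / n)
    return effect

# Toy usage with binary treatment T, outcome Y, and confounder Z.
samples = [(1, 1, 0), (1, 0, 0), (0, 0, 1), (1, 1, 1), (0, 0, 0), (0, 1, 1)]
print(backdoor_effect(samples, t_value=1, y_value=1))
```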


Commonsense Causal Reasoning between Short Texts

AAAI Conferences

Commonsense causal reasoning is the process of capturing and understanding the causal dependencies among events and actions. Such events and actions can be expressed as terms, phrases, or sentences in natural language text. Therefore, one possible way of obtaining causal knowledge is by extracting causal relations between terms or phrases from a large text corpus. However, causal relations in text are sparse, ambiguous, and sometimes implicit, and thus difficult to obtain. This paper attacks the problem of commonsense causality reasoning between short texts (phrases and sentences) using a data-driven approach. We propose a framework that automatically harvests a network of cause-effect terms from a large web corpus. Backed by this network, we propose a novel and effective metric to properly model the causality strength between terms. We show that these signals can be aggregated for causality reasoning between short texts, including sentences and phrases. In particular, our approach outperforms all previously reported results on the standard SEMEVAL COPA task by substantial margins.
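
A rough, hypothetical sketch of the general idea follows: score harvested cause-effect term pairs with a PMI-style ratio and average the term-level signals over the word pairs of two short texts. The pair counts, the specific ratio, and the aggregation are illustrative assumptions, not the metric proposed in the paper.

```python
# Hypothetical sketch, not the paper's exact metric: score harvested
# cause-effect term pairs with a PMI-style ratio and average over the
# word pairs of two short texts.
import math

# Assumed toy counts harvested from a corpus: (cause_term, effect_term) -> frequency.
pair_counts = {("rain", "wet"): 50, ("rain", "flood"): 20, ("fire", "smoke"): 80}
total = sum(pair_counts.values())
cause_counts, effect_counts = {}, {}
for (c, e), n in pair_counts.items():
    cause_counts[c] = cause_counts.get(c, 0) + n
    effect_counts[e] = effect_counts.get(e, 0) + n

def causal_strength(cause, effect):
    """PMI-like signal for a single cause/effect term pair."""
    joint = pair_counts.get((cause, effect), 0)
    if joint == 0:
        return 0.0
    p_joint = joint / total
    p_cause = cause_counts[cause] / total
    p_effect = effect_counts[effect] / total
    return math.log(p_joint / (p_cause * p_effect))

def text_causality(cause_text, effect_text):
    """Aggregate term-level signals into a phrase-level causality score."""
    pairs = [(c, e) for c in cause_text.split() for e in effect_text.split()]
    return sum(causal_strength(c, e) for c, e in pairs) / max(len(pairs), 1)

print(text_causality("heavy rain", "the street got wet"))
```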


Causal Learning versus Reinforcement Learning for Knowledge Learning and Problem Solving

AAAI Conferences

Causal learning and reinforcement learning are both important AI learning mechanisms but are usually treated separately, despite the fact that both are directly relevant to problem-solving processes. In this paper we propose a method for causal learning and problem solving, compare and contrast it with AI reinforcement learning, and show that the two methods are closely related, differing only in the values of the learning rate α and discount factor γ. However, the causal learning framework emphasizes quick but non-optimal construction of problem solutions, while AI reinforcement learning generates optimal solutions at the expense of speed. Cognitive science literature is reviewed, and it is found that psychological reinforcement learning in lower-form animals such as mammals is distinct from AI reinforcement learning in that psychological reinforcement learning strives for neither speed nor optimality, and that higher-form animals such as humans and primates employ quick causal learning for survival instead of reinforcement learning. AI systems should likewise take advantage of a framework that employs rapid inductive causal learning to generate problem solutions, ensuring general viability through rapid adaptability without the need to always strive for optimality.
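
To make the claimed relationship concrete, here is a small assumed sketch (not the paper's code) of a tabular temporal-difference update in which the learning rate α and discount factor γ select between the two regimes contrasted in the abstract.

```python
# Assumed sketch, not the paper's code: a tabular temporal-difference update
#   Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
# where the settings of alpha and gamma select between the two regimes.
from collections import defaultdict

ACTIONS = ["push", "pull"]
Q = defaultdict(float)

def update(Q, s, a, r, s_next, alpha, gamma):
    """One update of the action-value table Q."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Standard RL regime: gradual, optimality-seeking updates.
update(Q, "door_closed", "push", r=1.0, s_next="door_open", alpha=0.1, gamma=0.9)

# Quick-causal-learning regime as contrasted in the abstract: a single observed
# outcome is committed immediately (alpha = 1) with no lookahead (gamma = 0).
update(Q, "door_closed", "push", r=1.0, s_next="door_open", alpha=1.0, gamma=0.0)
print(dict(Q))
```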


Bayesian Causal Induction

arXiv.org Artificial Intelligence

Discovering causal relationships is a hard task, often hindered by the need for intervention, and often requiring large amounts of data to resolve statistical uncertainty. However, humans quickly arrive at useful causal relationships. One possible reason is that humans extrapolate from past experience to new, unseen situations: that is, they encode beliefs over causal invariances, allowing for sound generalization from the observations they obtain from directly acting in the world. Here we outline a Bayesian model of causal induction where beliefs over competing causal hypotheses are modeled using probability trees. Based on this model, we illustrate why, in the general case, we need interventions plus constraints on our causal hypotheses in order to extract causal information from our experience.
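
The following assumed sketch illustrates the point about interventions with two competing hypotheses over binary variables, H1: X → Y and H2: Y → X, parameterized to give the same observational joint; Bayesian updating on observations leaves the posterior at 50/50, whereas data gathered under do(X=1) favors H1. The parameterization and data are illustrative and are not the probability-tree model of the paper.

```python
# Assumed illustration, not the paper's probability-tree model: two causal
# hypotheses, H1: X -> Y and H2: Y -> X, chosen so that they imply the same
# observational joint over (X, Y).  Observation alone cannot separate them;
# data gathered under the intervention do(X=1) can.

P_X = 0.5                       # P(X = 1) in the observational regime
P_Y_GIVEN_X = {0: 0.2, 1: 0.8}  # P(Y = 1 | X = x)

def lik_obs(xy, hypothesis):
    """Likelihood of an observed (x, y) pair; identical under H1 and H2."""
    x, y = xy
    p_x = P_X if x == 1 else 1 - P_X
    p_y = P_Y_GIVEN_X[x] if y == 1 else 1 - P_Y_GIVEN_X[x]
    return p_x * p_y

def lik_do_x1(y, hypothesis):
    """Likelihood of outcome y after the intervention do(X = 1)."""
    if hypothesis == "H1":       # X causes Y, so Y still responds to the forced X
        p_y1 = P_Y_GIVEN_X[1]
    else:                        # Y causes X, so Y keeps its marginal distribution
        p_y1 = P_X * P_Y_GIVEN_X[1] + (1 - P_X) * P_Y_GIVEN_X[0]
    return p_y1 if y == 1 else 1 - p_y1

def posterior(data, likelihood):
    """Sequential Bayesian update of the belief over the two hypotheses."""
    belief = {"H1": 0.5, "H2": 0.5}
    for point in data:
        belief = {h: belief[h] * likelihood(point, h) for h in belief}
        z = sum(belief.values())
        belief = {h: p / z for h, p in belief.items()}
    return belief

print(posterior([(1, 1), (0, 0)], lik_obs))  # stays at 0.5 / 0.5
print(posterior([1, 1, 1], lik_do_x1))       # shifts toward H1
```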


Causal Data Science for Financial Stress Testing

arXiv.org Artificial Intelligence

The most recent financial upheavals have cast doubt on the adequacy of some conventional quantitative risk management strategies, such as VaR (Value at Risk), in many common situations. Consequently, there has been an increasing need for verisimilar financial stress testing, namely simulating and analyzing financial portfolios in extreme, albeit rare, scenarios. Unlike conventional risk management, which exploits statistical correlations among financial instruments, here we focus our analysis on the notion of probabilistic causation, embodied by Suppes-Bayes Causal Networks (SBCNs): probabilistic graphical models with many attractive features for more accurate causal analysis in generating financial stress scenarios. In this paper, we present a novel approach for conducting stress testing of financial portfolios based on SBCNs in combination with classical machine learning classification tools. The resulting method is shown to be capable of correctly discovering the causal relationships among financial factors that affect the portfolios and thus simulating stress-testing scenarios with higher accuracy and lower computational complexity than conventional Monte Carlo simulations.
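
As a minimal, assumed sketch of the probabilistic-causation ingredient behind SBCNs, the code below checks Suppes' probability-raising condition, P(e | c) > P(e | ¬c), on boolean scenario records; the event names and data are hypothetical, and the full SBCN machinery (temporal priority, network inference, regularization) is not shown.

```python
# Minimal, assumed sketch of Suppes' probability-raising condition used by
# Suppes-Bayes Causal Networks: an edge c -> e is admissible only if c precedes
# e and P(e | c) > P(e | not c).  Event names and records are hypothetical.

def probability_raising(records, cause, effect):
    """Check P(effect | cause) > P(effect | not cause) on boolean records."""
    with_c = [r for r in records if r[cause]]
    without_c = [r for r in records if not r[cause]]
    if not with_c or not without_c:
        return False
    p_e_given_c = sum(r[effect] for r in with_c) / len(with_c)
    p_e_given_not_c = sum(r[effect] for r in without_c) / len(without_c)
    return p_e_given_c > p_e_given_not_c

# Toy scenario records: did rates rise, and did the bond portfolio lose value?
records = [
    {"rates_up": True,  "bond_loss": True},
    {"rates_up": True,  "bond_loss": True},
    {"rates_up": True,  "bond_loss": False},
    {"rates_up": False, "bond_loss": False},
    {"rates_up": False, "bond_loss": True},
    {"rates_up": False, "bond_loss": False},
]
print(probability_raising(records, "rates_up", "bond_loss"))  # True: 2/3 > 1/3
```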