A theoretical study of Y structures for causal discovery

arXiv.org Artificial Intelligence

There are several existing algorithms that, under appropriate assumptions, can reliably identify a subset of the underlying causal relationships from observational data. This paper introduces the first computationally feasible score-based algorithm that can reliably identify causal relationships in the large-sample limit for discrete models, while allowing for the possibility of unobserved common causes. In doing so, the algorithm never needs to assign scores to causal structures containing unobserved common causes. The algorithm is based on the identification of so-called Y substructures within Bayesian network structures that can be learned from observational data. An example of a Y substructure is A -> C, B -> C, C -> D. After providing background on causal discovery, the paper proves the conditions under which the algorithm is reliable in the large-sample limit.
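
To make the definition concrete, here is a minimal sketch (an illustration, not the paper's algorithm) that enumerates Y substructures in a DAG represented as a dict of parent sets. Following the usual definition, A and B are required to be non-adjacent, so that A -> C <- B forms an unshielded collider.

    from itertools import combinations

    def find_y_structures(parents):
        # Enumerate Y substructures A -> C, B -> C, C -> D in a DAG given
        # as a dict mapping each node to its set of direct parents. A and
        # B must be non-adjacent, so A -> C <- B is an unshielded collider.
        children = {v: set() for v in parents}
        for v, ps in parents.items():
            for p in ps:
                children[p].add(v)

        def adjacent(x, y):
            return y in parents[x] or x in parents[y]

        found = []
        for c in parents:
            for a, b in combinations(sorted(parents[c]), 2):
                if adjacent(a, b):
                    continue  # shielded collider: not a Y substructure
                for d in sorted(children[c]):
                    found.append((a, b, c, d))
        return found

    # The example from the abstract: A -> C, B -> C, C -> D.
    dag = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
    print(find_y_structures(dag))  # [('A', 'B', 'C', 'D')]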


Learning Bayesian Networks: The Combination of Knowledge and Statistical Data

AAAI Conferences

We describe algorithms for learning Bayesian networks from a combination of user knowledge and statistical data. The algorithms have two components: a scoring metric and a search procedure. The scoring metric takes a network structure, statistical data, and a user's prior knowledge, and returns a score proportional to the posterior probability of the network structure given the data. The search procedure generates networks for evaluation by the scoring metric.
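
As an illustration of this two-component design (not the paper's implementation), the sketch below pairs a BIC family score, a common stand-in for the paper's Bayesian metric, which additionally folds in the user's prior knowledge, with a greedy edge-addition search. All names and the toy data are illustrative assumptions.

    import math
    from collections import Counter
    from itertools import product

    def family_bic(data, child, parents):
        # BIC score of one node given its parent set, for discrete data.
        # `data` is a list of dicts mapping variable name -> value.
        n = len(data)
        joint = Counter((tuple(r[p] for p in parents), r[child]) for r in data)
        marg = Counter(tuple(r[p] for p in parents) for r in data)
        ll = sum(c * math.log(c / marg[pa]) for (pa, _), c in joint.items())
        n_child_vals = len({r[child] for r in data})
        n_params = (n_child_vals - 1) * len(marg)
        return ll - 0.5 * n_params * math.log(n)

    def greedy_search(data, variables):
        # Hill-climb over single-edge additions, keeping the graph acyclic.
        parents = {v: set() for v in variables}

        def creates_cycle(u, v):
            # u -> v closes a cycle iff v is already an ancestor of u.
            stack, seen = [u], set()
            while stack:
                x = stack.pop()
                if x == v:
                    return True
                if x not in seen:
                    seen.add(x)
                    stack.extend(parents[x])
            return False

        while True:
            best, best_gain = None, 1e-9
            for u, v in product(variables, repeat=2):
                if u == v or u in parents[v] or creates_cycle(u, v):
                    continue
                gain = (family_bic(data, v, sorted(parents[v] | {u}))
                        - family_bic(data, v, sorted(parents[v])))
                if gain > best_gain:
                    best, best_gain = (u, v), gain
            if best is None:
                return parents
            parents[best[1]].add(best[0])

    # Toy data: X and Y strongly dependent; one edge is added between them.
    data = [{"X": i % 2, "Y": i % 2} for i in range(40)] + [{"X": 0, "Y": 1}]
    print(greedy_search(data, ["X", "Y"]))
    # e.g. {'X': set(), 'Y': {'X'}} (either orientation scores the same here)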


Causal Network Learning from Multiple Interventions of Unknown Manipulated Targets

arXiv.org Machine Learning

In this paper, we discuss structure learning of causal networks from multiple data sets obtained from external intervention experiments in which we do not know which variables are manipulated. For example, experimental conditions may be changed by varying the temperature or administering drugs, without knowledge of which target variables the external interventions manipulate. Structure learning from such data sets is more difficult. For this setting, we first discuss the identifiability of causal structures. Next, we present a graph-merging method for learning causal networks when the sample size of each intervention is large. Then, for the case where the sample sizes of the interventions are relatively small, we propose a data-pooling method that pools all of the interventional data sets together for learning. Further, we propose a re-sampling approach to evaluate the edges of the causal network learned by the data-pooling method. Finally, we illustrate the proposed learning methods by simulations.
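
As a rough sketch of the data-pooling and re-sampling ideas (not the authors' implementation), the Python below pools the interventional data sets, draws bootstrap resamples, and reports how often each directed edge is recovered. Here `learn_network` is a hypothetical placeholder for any structure learner, and `toy_learner` is a deliberately crude stand-in.

    import random
    from collections import Counter

    def pooled_bootstrap_edges(datasets, learn_network, n_boot=100, seed=0):
        # Pool the interventional data sets, then bootstrap to evaluate
        # edges: the frequency of an edge across bootstrap replicates is
        # its re-sampling score.
        pooled = [r for d in datasets for r in d]
        rng = random.Random(seed)
        counts = Counter()
        for _ in range(n_boot):
            resample = [rng.choice(pooled) for _ in pooled]
            counts.update(learn_network(resample))
        return {edge: c / n_boot for edge, c in counts.items()}

    # Toy stand-in learner: orient X -> Y whenever the values agree often.
    def toy_learner(records):
        agree = sum(r["X"] == r["Y"] for r in records)
        return {("X", "Y")} if agree > 0.8 * len(records) else set()

    datasets = [[{"X": i % 2, "Y": i % 2} for i in range(30)],
                [{"X": 1, "Y": 1} for _ in range(10)]]
    print(pooled_bootstrap_edges(datasets, toy_learner))  # {('X', 'Y'): 1.0}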


A Primer on Causal Analysis

arXiv.org Machine Learning

We provide a conceptual map to navigate causal analysis problems. Focusing on discrete random variables, we consider causal effect estimation from observational data. The presented approaches also apply to continuous variables, although estimation becomes more complex. We then introduce the four schools of thought for causal analysis.
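
For concreteness, a standard worked example of causal effect estimation with discrete variables (illustrative, not taken from the primer itself) is the back-door adjustment P(Y | do(X = x)) = sum_z P(Y | X = x, Z = z) P(Z = z), sketched below under the assumption that Z is a valid adjustment set.

    from collections import Counter

    def do_effect(data, x_val, x="X", y="Y", z="Z"):
        # Back-door adjustment for discrete variables:
        #   P(Y | do(X = x)) = sum_z P(Y | X = x, Z = z) * P(Z = z),
        # assuming Z blocks every confounding path between X and Y.
        # Strata with no X = x samples contribute nothing (a positivity
        # violation: the effect is not estimable there).
        n = len(data)
        pz = Counter(r[z] for r in data)
        effect = Counter()
        for zv, nz in pz.items():
            stratum = [r for r in data if r[z] == zv and r[x] == x_val]
            for r in stratum:
                effect[r[y]] += (1.0 / len(stratum)) * (nz / n)
        return dict(effect)

    # Toy data with confounder Z: X = 1 is more common when Z = 1.
    data = ([{"Z": 0, "X": 0, "Y": 0}] * 8 +
            [{"Z": 0, "X": 1, "Y": 1}, {"Z": 0, "X": 1, "Y": 0}] +
            [{"Z": 1, "X": 1, "Y": 1}] * 8 + [{"Z": 1, "X": 0, "Y": 0}] * 2)
    print(do_effect(data, 1))  # P(Y | do(X = 1)) -> {1: 0.75, 0: 0.25}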


Estimating Causal Direction and Confounding of Two Discrete Variables

arXiv.org Machine Learning

We propose a method to classify the causal relationship between two discrete variables given only the joint distribution of the variables, acknowledging that the method is subject to an inherent baseline error. We assume that the causal system is acyclic, but we do allow for hidden common causes. Our algorithm presupposes that the probability distribution $P(C)$ of a cause $C$ is independent of the probability distribution $P(E\mid C)$ of the cause-effect mechanism. While our classifier is trained with a Bayesian assumption of flat hyperpriors, we do not make this assumption about our test data. This work connects to recent developments on the identifiability of causal models over continuous variables under the assumption of "independent mechanisms". Carefully-commented Python notebooks that reproduce all our experiments are available online at http://vision.caltech.edu/~kchalupk/code.html.
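
As a minimal sketch of the Bayesian generative assumption described here (not the authors' classifier; their notebooks are at the URL above), the snippet below draws $P(C)$ and $P(E\mid C)$ independently from flat Dirichlet hyperpriors and then samples cause-effect pairs. All names are illustrative.

    import random

    def sample_flat_dirichlet(k, rng):
        # A Dirichlet(1, ..., 1) draw is uniform over the k-simplex.
        w = [rng.gammavariate(1.0, 1.0) for _ in range(k)]
        s = sum(w)
        return [x / s for x in w]

    def sample_cause_effect_pairs(k_c, k_e, n, rng):
        # Independent-mechanisms assumption: P(C) and P(E | C) are drawn
        # independently of each other, here from flat hyperpriors.
        p_c = sample_flat_dirichlet(k_c, rng)
        p_e_given_c = [sample_flat_dirichlet(k_e, rng) for _ in range(k_c)]
        pairs = []
        for _ in range(n):
            c = rng.choices(range(k_c), weights=p_c)[0]
            e = rng.choices(range(k_e), weights=p_e_given_c[c])[0]
            pairs.append((c, e))
        return pairs

    rng = random.Random(0)
    print(sample_cause_effect_pairs(3, 3, 5, rng))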