
A Survey on Causal Inference

arXiv.org Artificial Intelligence

Causal inference has been a critical research topic across many domains, such as statistics, computer science, education, public policy, and economics, for decades. Nowadays, estimating causal effects from observational data has become an appealing research direction owing to the large amount of available data and the low budget required, compared with randomized controlled trials. Fueled by the rapid development of machine learning, various causal effect estimation methods for observational data have sprung up. In this survey, we provide a comprehensive review of causal inference methods under the potential outcome framework, one of the well-known causal inference frameworks. The methods are divided into two categories depending on whether or not they require all three assumptions of the potential outcome framework. For each category, both traditional statistical methods and recent machine-learning-enhanced methods are discussed and compared. Plausible applications of these methods are also presented, including applications in advertising, recommendation, medicine, and so on. Moreover, the commonly used benchmark datasets and open-source code are summarized, which helps researchers and practitioners explore, evaluate, and apply causal inference methods.
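To make the survey's setting concrete, here is a minimal sketch of one of the simplest estimators it covers: inverse propensity weighting under the potential outcome framework's assumptions (unconfoundedness, positivity, SUTVA). The synthetic data and all settings are ours, purely for illustration; the true average treatment effect is set to 2.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 1))                      # a single observed confounder
p = 1 / (1 + np.exp(-x[:, 0]))                   # true propensity score
t = rng.binomial(1, p)                           # treatment assignment
y = 2.0 * t + x[:, 0] + rng.normal(size=n)       # true ATE = 2

# naive difference in means is biased because x drives both t and y
naive = y[t == 1].mean() - y[t == 0].mean()

# inverse propensity weighting with an estimated propensity model
e = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
```

The naive estimate overshoots 2 because treated units have systematically larger `x`; reweighting by the estimated propensity score corrects this confounding.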


Deep Learning for Causal Inference

arXiv.org Machine Learning

In this paper, we propose deep learning techniques for econometrics, specifically for causal inference and for estimating individual as well as average treatment effects. The contribution of this paper is twofold: 1. For generalized neighbor matching to estimate individual and average treatment effects, we analyze the use of autoencoders for dimensionality reduction while maintaining the local neighborhood structure among the data points in the embedding space. This deep-learning-based technique is shown to perform better than simple k-nearest-neighbor matching for estimating treatment effects, especially when the data points have many features/covariates but reside on a low-dimensional manifold in a high-dimensional space. We also observe better performance than manifold learning methods for neighbor matching. 2. Propensity score matching is one specific and popular way to perform matching in order to estimate average and individual treatment effects. We propose the use of deep neural networks (DNNs) for propensity score matching and present a network called PropensityNet for this purpose. This generalizes the logistic regression technique traditionally used to estimate propensity scores, and we show empirically that DNNs perform better than logistic regression at propensity score matching. Code for both methods will be made available shortly on GitHub at: https://github.com/vikas84bf
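The paper's PropensityNet architecture is not specified in the abstract, so the following sketch stands in a generic small neural network (scikit-learn's `MLPClassifier`) for the propensity model and then does 1-nearest-neighbor matching on the estimated scores. The data-generating process, network size, and the decision to match treated units to controls are all our assumptions, chosen so that the treatment assignment is nonlinear in the covariates, which is exactly the regime where a DNN can beat logistic regression.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 5))
# nonlinear treatment assignment that a plain logistic regression would misfit
p = 1 / (1 + np.exp(-(X[:, 0] ** 2 - 1 + X[:, 1])))
t = rng.binomial(1, p)
y = 3.0 * t + X[:, 0] ** 2 + X[:, 1] + rng.normal(size=n)  # true effect = 3

# a small neural network standing in for PropensityNet (architecture is our guess)
net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=1000, random_state=0)
score = net.fit(X, t).predict_proba(X)[:, 1].reshape(-1, 1)

# 1-NN matching of each treated unit to the control with the closest score
nn = NearestNeighbors(n_neighbors=1).fit(score[t == 0])
_, idx = nn.kneighbors(score[t == 1])
att = np.mean(y[t == 1] - y[t == 0][idx[:, 0]])
```

Because the outcome's confounding term here is a monotone function of the true propensity logit, matching on a well-estimated score removes essentially all of the bias.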


Apply Propensity Score Methods in Causal Inference -- Part 1: Stratification

#artificialintelligence

This article introduces and implements the propensity score framework from Dehejia and Wahba (1999), "Causal Effects in Non-Experimental Studies: Reevaluating the Evaluation of Training Programs," Journal of the American Statistical Association, Vol. I will briefly go over the theory and then walk through how I implemented stratification matching step by step. The full Python code is provided at the end of the article. The intuition of the propensity score method is: instead of conditioning on the full vector of covariates Xᵢ, which becomes difficult when there are many pre-treatment variables and when the treatment and comparison groups are very different, we condition on a propensity score estimated from Xᵢ. Propensity score matching works the same way as covariate matching, except that we match on the score instead of on the covariates directly.
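The stratification step described above can be sketched as follows on synthetic data. This is a simplified version of the idea, not the article's own code: we cut the estimated propensity score into quintiles, take the treated-minus-control mean difference within each stratum, and average the strata weighted by their treated counts (an ATT weighting). The data-generating process and the choice of five strata are our assumptions; Dehejia and Wahba use a more refined stratum construction.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 4000
x = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))
t = rng.binomial(1, p)
y = 1.5 * t + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)  # true effect = 1.5

df = pd.DataFrame({"t": t, "y": y})
df["ps"] = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]

# stratify on propensity-score quintiles; within each stratum, treated and
# control units have similar scores, so the mean difference is nearly unconfounded
df["stratum"] = pd.qcut(df["ps"], 5, labels=False)
effects = df.groupby("stratum")[["t", "y"]].apply(
    lambda g: g.loc[g.t == 1, "y"].mean() - g.loc[g.t == 0, "y"].mean()
)
weights = df[df.t == 1].groupby("stratum").size()  # ATT weighting
att = np.average(effects, weights=weights)
```

With only five strata a little within-stratum confounding remains, which is why finer stratification (or matching) is often preferred when sample size allows.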


Feature Selection for Causal Inference from High Dimensional Observational Data with Outcome Adaptive Elastic Net

arXiv.org Artificial Intelligence

Feature selection is an extensively studied technique in the machine learning literature, where the main objective is to identify the subset of features with the highest predictive power. In causal inference, however, our goal is to identify the set of variables associated with both the treatment variable and the outcome (i.e., the confounders). While controlling for the confounding variables helps us achieve an unbiased estimate of the causal effect, recent research shows that additionally controlling for pure outcome predictors can reduce the variance of the estimate. In this paper, we propose an Outcome Adaptive Elastic-Net (OAENet) method designed specifically for causal inference, which selects the confounders and outcome predictors for inclusion in the propensity score model or in the matching mechanism. OAENet provides two major advantages over existing methods: it performs well on correlated data, and it can be combined with any matching method and any estimator. In addition, OAENet is computationally efficient compared to state-of-the-art methods.
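The core outcome-adaptive idea can be sketched as follows; this is our own simplified reconstruction, not the paper's algorithm. An outcome regression supplies per-variable penalty weights, so variables unrelated to the outcome (instruments, noise) receive a heavy penalty in the elastic-net propensity model, while confounders and outcome predictors are penalized lightly. We emulate the weighted penalty by rescaling each column, since scikit-learn's elastic net does not accept per-feature weights directly.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(3)
n, d = 2000, 6
X = rng.normal(size=(n, d))
# X0: confounder, X1: outcome-only predictor, X2: instrument, X3-X5: noise
t = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + X[:, 2]))))
y = t + 2 * X[:, 0] + 2 * X[:, 1] + rng.normal(size=n)

# step 1: outcome-regression coefficients give adaptive penalty weights;
# near-zero outcome association (instrument, noise) -> very large penalty
beta = LinearRegression().fit(np.c_[X, t], y).coef_[:d]
w = 1.0 / (np.abs(beta) + 1e-8)

# step 2: a weighted L1 penalty sum_j w_j * |theta_j| equals a uniform L1
# penalty after rescaling column j by 1/w_j (the L2 part becomes weighted too)
Xs = X / w
enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=0.1, max_iter=10000)
enet.fit(Xs, t)
coef = enet.coef_[0] / w                      # back to the original scale
selected = set(np.flatnonzero(np.abs(coef) > 1e-8))
```

The confounder X0 survives with a large coefficient, while the instrument X2, despite strongly predicting treatment, is shrunk toward zero, which is desirable because conditioning on instruments inflates the variance of effect estimates.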


DoWhy: An End-to-End Library for Causal Inference

arXiv.org Artificial Intelligence

In addition to efficient statistical estimators of a treatment's effect, successful application of causal inference requires specifying assumptions about the mechanisms underlying the observed data and testing whether, and to what extent, they are valid. However, most libraries for causal inference focus only on the task of providing powerful statistical estimators. We describe DoWhy, an open-source Python library that is built with causal assumptions as its first-class citizens, based on the formal framework of causal graphs to specify and test causal assumptions. DoWhy presents an API for the four steps common to any causal analysis---1) modeling the data using a causal graph and structural assumptions, 2) identifying whether the desired effect is estimable under the causal model, 3) estimating the effect using statistical estimators, and finally 4) refuting the obtained estimate through robustness checks and sensitivity analyses. In particular, DoWhy implements a number of robustness checks, including placebo tests, bootstrap tests, and tests for unobserved confounding. DoWhy is an extensible library that supports interoperability with other implementations, such as EconML and CausalML, for the estimation step. The library is available at https://github.com/microsoft/dowhy