Model-Robust Counterfactual Prediction Method

arXiv.org Machine Learning

We develop a method for assessing counterfactual predictions with multiple groups. It is tuning-free and operational in high-dimensional covariate scenarios, with a runtime that scales linearly in the number of datapoints. The computational efficiency is leveraged to produce valid confidence intervals using the conformal prediction approach. The method is model-robust in that it enables inferences from observational data even when the data model is misspecified. The approach is illustrated using both real and synthetic datasets.
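As a rough illustration of how conformal calibration produces valid intervals, the sketch below implements generic split conformal prediction in Python: fit any regressor on a training split, compute absolute residuals on a calibration split, and widen the point prediction by their finite-sample-corrected (1 - alpha) quantile. The least-squares predictor and the name conformal_interval are placeholders, not the paper's construction; for counterfactual prediction one would calibrate within the treatment group of interest.

import numpy as np

def conformal_interval(X_tr, y_tr, X_cal, y_cal, x_new, alpha=0.1):
    # Stand-in predictor: ordinary least squares with an intercept term.
    A = np.hstack([X_tr, np.ones((len(X_tr), 1))])
    beta, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    predict = lambda X: np.hstack([X, np.ones((len(X), 1))]) @ beta
    # Conformity scores on the held-out calibration split.
    scores = np.abs(y_cal - predict(X_cal))
    # Finite-sample corrected quantile level, capped at 1.
    level = min(np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores), 1.0)
    q = np.quantile(scores, level)
    y_hat = predict(x_new[None, :])[0]
    return y_hat - q, y_hat + q

The (n + 1) correction in the quantile level is what gives split conformal prediction its finite-sample 1 - alpha coverage guarantee, and each new prediction only costs one model evaluation plus a precomputed quantile.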



Counterfactual Off-Policy Evaluation with Gumbel-Max Structural Causal Models

arXiv.org Machine Learning

We introduce an off-policy evaluation procedure for highlighting episodes where applying a reinforcement learning (RL) policy is likely to have produced a substantially different outcome than the observed policy. In particular, we introduce a class of structural causal models (SCMs) for generating counterfactual trajectories in finite partially observable Markov decision processes (POMDPs). We see this as a useful procedure for off-policy "debugging" in high-risk settings (e.g., healthcare); by decomposing the expected difference in reward between the RL policy and the observed policy into specific episodes, we can identify episodes where the counterfactual difference in reward is most dramatic. This in turn can be used to facilitate review of specific episodes by domain experts. We demonstrate the utility of this procedure with a synthetic environment of sepsis management.
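The core sampling step can be made concrete. Under a Gumbel-Max SCM, a categorical transition is generated as argmax_j(log p_j + u_j) with i.i.d. Gumbel(0) noise u; counterfactual sampling draws the noise from its posterior given the observed category (the top-down Gumbel construction) and reuses it under the counterfactual transition probabilities. The Python sketch below, with hypothetical names, illustrates this idea and is not the authors' released implementation.

import numpy as np
from scipy.special import logsumexp

def counterfactual_category(p_obs, p_cf, k_obs, rng):
    # Abduction: sample the Gumbels with locations log p_obs from their
    # posterior given that k_obs attained the maximum (the max is
    # Gumbel(logsumexp), the remaining coordinates are truncated below it).
    logits = np.log(p_obs)
    top = rng.gumbel(loc=logsumexp(logits))
    g = rng.gumbel(loc=logits)
    shifted = np.where(np.arange(len(p_obs)) == k_obs,
                       top,
                       -np.log(np.exp(-g) + np.exp(-top)))
    u = shifted - logits  # posterior sample of the exogenous noise
    # Action + prediction: reuse the same noise under the counterfactual
    # transition probabilities p_cf.
    return int(np.argmax(np.log(p_cf) + u))

rng = np.random.default_rng(0)
print(counterfactual_category(np.array([0.7, 0.2, 0.1]),
                              np.array([0.1, 0.2, 0.7]), k_obs=0, rng=rng))

Sharing the exogenous noise between the observed and counterfactual transitions is what couples the two trajectories and makes per-episode comparisons of reward meaningful.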


Learning Representations for Counterfactual Inference

arXiv.org Machine Learning

Observational studies are rising in importance due to the widespread accumulation of data in fields such as healthcare, education, employment and ecology. We consider the task of answering counterfactual questions such as, "Would this patient have lower blood sugar had she received a different medication?". We propose a new algorithmic framework for counterfactual inference which brings together ideas from domain adaptation and representation learning. In addition to a theoretical justification, we perform an empirical comparison with previous approaches to causal inference from observational data. Our deep learning algorithm significantly outperforms the previous state-of-the-art.
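To make the general recipe concrete, the minimal PyTorch sketch below learns a shared representation Phi(x), predicts the factual outcome from (Phi(x), t), and penalises the discrepancy between treated and control units in representation space. The layer sizes, the simple mean-difference balance penalty, and the names BalancedNet / balanced_loss are illustrative assumptions; the paper's algorithms use their own discrepancy measures and architectures.

import torch
import torch.nn as nn

class BalancedNet(nn.Module):
    # Shared representation Phi followed by an outcome head that also
    # receives the treatment indicator t (float tensor of shape (n, 1)).
    def __init__(self, d_in, d_rep=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_rep), nn.ReLU(),
                                 nn.Linear(d_rep, d_rep), nn.ReLU())
        self.head = nn.Linear(d_rep + 1, 1)

    def forward(self, x, t):
        r = self.phi(x)
        return self.head(torch.cat([r, t], dim=1)), r

def balanced_loss(y_hat, y, r, t, alpha=1.0):
    factual = ((y_hat - y) ** 2).mean()  # fit the observed outcomes
    # Crude balance term: distance between group means in representation
    # space (assumes each batch contains both treated and control units).
    treated, control = r[t[:, 0] == 1], r[t[:, 0] == 0]
    balance = (treated.mean(0) - control.mean(0)).norm()
    return factual + alpha * balance

In this sketch a counterfactual prediction for a unit is obtained by feeding its learned representation to the outcome head with the treatment indicator flipped.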


MultiVerse: Causal Reasoning using Importance Sampling in Probabilistic Programming

arXiv.org Artificial Intelligence

Counterfactuals are particularly special causal questions as they involve the full suite of causal tools: posterior inference and interventional reasoning (Pearl, 2000). Counterfactuals are probabilistic in nature and difficult to infer, but are powerful for explanation (Wachter et al., 2017; Sokol and Flach, 2018; Guidotti et al., 2018; Pedreschi et al., 2019), fairness (Kusner et al., 2017; Zhang and Bareinboim, 2018; Russell et al., 2017), policy search (e.g., Buesing et al., 2019), and are also quantities of interest on their own (e.g.
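As a toy illustration of the abduction-action-prediction pipeline that importance sampling supports, the Python sketch below weights prior samples of exogenous noise by the evidence likelihood, intervenes on the cause, and pushes the re-weighted noise through the model. The linear structural equation, the noise scale sigma, and all variable names are made up for illustration and are not MultiVerse's actual interface.

import numpy as np

rng = np.random.default_rng(0)

# Toy SCM (an assumption for illustration):
#   U ~ Normal(0, 1)       exogenous noise
#   Y = 2 * X + U          structural equation
# Evidence: X = 1.0 and a noisy measurement of Y with std sigma.
x_obs, y_obs, sigma = 1.0, 3.4, 0.5

# 1. Abduction: importance-sample the posterior over U given the evidence,
#    using the prior as proposal and the measurement likelihood as weight.
n = 100_000
u = rng.normal(0.0, 1.0, size=n)
log_w = -0.5 * ((y_obs - (2 * x_obs + u)) / sigma) ** 2
w = np.exp(log_w - log_w.max())

# 2. Action: intervene do(X = 0.0).
x_cf = 0.0

# 3. Prediction: push the re-weighted noise through the intervened model.
y_cf = 2 * x_cf + u
print("E[Y | do(X=0), evidence] ~", np.average(y_cf, weights=w))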