Falsification of Unconfoundedness by Testing Independence of Causal Mechanisms

Karlsson, Rickard K. A., Krijthe, Jesse H.

arXiv.org Machine Learning 

Using observational studies to estimate treatment effects is a ubiquitous yet challenging task in many disciplines, such as medicine [Hernán and Robins, 2006] and the social sciences [Athey and Imbens, 2017]. While there exists a rich literature of methods for treatment effect estimation in the observational setting [Bang and Robins, 2005, Wager and Athey, 2018, Chernozhukov et al., 2018], all of these methods have in common that certain, often untestable, conditions must hold before a causal effect can be estimated. One such condition is unconfoundedness: the assumption that there is no unmeasured confounding, i.e., no unobserved factors that influence both the treatment and the outcome of interest. If unmeasured confounders are present, our causal effect estimates are likely to be biased and inconsistent [Greenland et al., 1999]. This can have serious downstream consequences, such as unknowingly recommending an ineffective or, even worse, potentially harmful treatment policy. Unfortunately, without making further assumptions, it is in general impossible to verify all assumptions needed to identify treatment effects from observational data.

In this work, we investigate a novel strategy for falsifying unconfoundedness. Specifically, we focus on the common scenario where observational datasets are collected from different heterogeneous sources, which we refer to as environments.
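To make the bias concrete, the following small simulation (an illustration, not taken from the paper) generates data in which an unmeasured confounder drives both treatment assignment and the outcome. The naive difference in mean outcomes between treated and untreated units is then badly biased, while a regression that could adjust for the confounder, if it were observed, recovers the true effect. All variable names and parameter values here are hypothetical choices for the sketch.

```python
import numpy as np

# Illustrative simulation: an unmeasured confounder U drives both the
# treatment T and the outcome Y, so the naive difference-in-means
# estimate of the treatment effect is biased away from the true effect.
rng = np.random.default_rng(0)
n = 200_000
tau = 1.0                         # true treatment effect (assumed)

u = rng.normal(size=n)            # the unmeasured confounder
# U raises the probability of receiving treatment ...
p = 1.0 / (1.0 + np.exp(-2.0 * u))
t = rng.binomial(1, p)
# ... and also raises the outcome directly.
y = tau * t + 2.0 * u + rng.normal(size=n)

# Naive comparison of treated vs. untreated units: confounded.
naive = y[t == 1].mean() - y[t == 0].mean()

# If U *were* observed, regressing Y on (T, U) would recover tau.
X = np.column_stack([np.ones(n), t, u])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[1]

print(f"true effect     : {tau:.2f}")
print(f"naive estimate  : {naive:.2f}")    # biased upward by the confounder
print(f"adjusted for U  : {adjusted:.2f}") # close to the true effect
```

Because U is unobserved in practice, the adjusted regression is unavailable to the analyst; this is precisely why falsification tests for unconfoundedness, such as the one developed in this paper, are valuable.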
