Invariant Risk Minimization

arXiv.org Artificial Intelligence

We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions. To achieve this goal, IRM learns a data representation such that the optimal classifier on top of that representation is the same across all training distributions. Through theory and experiments, we show how the invariances learned by IRM relate to the causal structures governing the data and enable out-of-distribution generalization.
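The paper's practical instantiation (IRMv1) turns this matching condition into a penalty: the gradient of each environment's risk with respect to a fixed "dummy" classifier must vanish. A minimal PyTorch sketch of that penalty follows; the `featurizer` model and the per-environment batches in `envs` are illustrative stand-ins, not the paper's code.

```python
# Minimal sketch of the IRMv1 penalty, in PyTorch.
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    """Squared gradient of the risk w.r.t. a fixed 'dummy' classifier w = 1.0.

    If the optimal classifier on top of the representation is the same in
    every environment, this gradient is zero, so the penalty measures how
    badly the invariance condition is violated.
    """
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2)

def irm_objective(featurizer, envs, lam=1.0):
    """Average per-environment risk plus lambda times the invariance penalty.

    envs: iterable of (inputs, labels) batches, one per training environment;
    labels are floats in {0, 1}.
    """
    risk, penalty = 0.0, 0.0
    for x, y in envs:
        logits = featurizer(x).squeeze(-1)
        risk = risk + F.binary_cross_entropy_with_logits(logits, y)
        penalty = penalty + irm_penalty(logits, y)
    return (risk + lam * penalty) / len(envs)
```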


Deep causal representation learning for unsupervised domain adaptation

arXiv.org Machine Learning

Studies show that the representations learned by deep neural networks can be transferred to similar prediction tasks in other domains for which we do not have enough labeled data. However, as we transition to higher layers in the model, the representations become more task-specific and less generalizable. Recent research on deep domain adaptation proposes to mitigate this problem by forcing the deep model to learn more transferable feature representations across domains, achieved by incorporating domain adaptation methods into the deep learning pipeline. The majority of existing models learn transferable feature representations that are highly correlated with the outcome. However, correlations are not always transferable. In this paper, we propose a novel deep causal representation learning framework for unsupervised domain adaptation that learns domain-invariant causal representations of the input from the source domain. We simulate a virtual target domain using reweighted samples from the source domain and estimate the causal effect of features on the outcomes. An extensive comparative study demonstrates the strengths of the proposed model for unsupervised domain adaptation via causal representations.
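The abstract leaves the reweighting scheme unspecified; a common way to build such weights in unsupervised domain adaptation (where unlabeled target inputs are available) is density-ratio estimation with a domain classifier. The sketch below shows only that generic step, with `x_source` and `x_target` as assumed inputs; it is not the paper's implementation.

```python
# Reweight source samples toward a target distribution by estimating the
# density ratio p_target(x) / p_source(x) with a logistic-regression domain
# classifier. Illustrative sketch only; the paper's scheme may differ.
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(x_source, x_target):
    """Weights w(x) ~ p_target(x) / p_source(x) for each source sample."""
    x = np.vstack([x_source, x_target])
    d = np.concatenate([np.zeros(len(x_source)), np.ones(len(x_target))])
    clf = LogisticRegression(max_iter=1000).fit(x, d)
    p_target = clf.predict_proba(x_source)[:, 1]  # P(domain = target | x)
    ratio = p_target / np.clip(1.0 - p_target, 1e-6, None)
    # Correct for the source/target sample-size imbalance in the classifier.
    return ratio * (len(x_source) / len(x_target))
```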


I-SPEC: An End-to-End Framework for Learning Transportable, Shift-Stable Models

arXiv.org Artificial Intelligence

Shifts in environment between development and deployment cause classical supervised learning to produce models that fail to generalize well to new target distributions. Recently, many solutions that find invariant predictive distributions have been developed. Among these, graph-based approaches do not require data from the target environment and can capture more stable information than alternative methods that find stable feature sets. However, these approaches assume that the data-generating process is known in the form of a full causal graph, which is generally not the case. In this paper, we propose I-SPEC, an end-to-end framework that addresses this shortcoming by using data to learn a partial ancestral graph (PAG). Using the PAG, we develop an algorithm that determines an interventional distribution that is stable to the declared shifts; this subsumes existing approaches that find stable feature sets, which are less accurate. We apply I-SPEC to a mortality prediction problem to show that it can learn a model that is robust to shifts without needing upfront knowledge of the full causal DAG.
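The first step of I-SPEC, learning a PAG from data, can be approximated with the FCI algorithm. A minimal sketch using the open-source causal-learn package is below; the random data is a placeholder, and I-SPEC's own algorithm for choosing a shift-stable interventional distribution is not part of this library.

```python
# Learn a partial ancestral graph (PAG) from observational data with FCI,
# assuming the `causal-learn` package. Data here is a random placeholder.
import numpy as np
from causallearn.search.ConstraintBased.FCI import fci

data = np.random.randn(500, 4)  # one row per sample, one column per variable
pag, edges = fci(data, alpha=0.05)  # FCI returns a PAG over the variables
for edge in edges:
    print(edge)  # edge marks encode ancestral and latent-confounder uncertainty
```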


Design Thinking Humanizes Data Science

#artificialintelligence

The article "Cognitive Hub: The Future of Work" and the supporting infographic (see Figure 1) provides an interesting perspective on some "technology combinations" that could transform the workplace of the future, all enabled by Artificial Intelligence (AI): The infographic above is very cool and depicts a very interesting proposition. However, my concern with the proposition is that while these technology combinations could be quite powerful, the Internet of Things, Human-Machine Interfaces, Cyber physical systems and Artificial Intelligence are only enabling technologies, that is, they only give someone or something the means to do something. You still need someone or something to actually do something; to decide what to do, when to do it, where to do it, with whom to do it, how to do it, the required items to do it, etc. There is a H-U-G-E difference between enabling and doing. For example, I can enable you with an individualized diet and fitness plan that will improve your life, but the subsequent improvement in your life won't happen if you are not doing it.


Improving Model Robustness Using Causal Knowledge

arXiv.org Machine Learning

For decades, researchers in fields such as the natural and social sciences have been verifying causal relationships and investigating hypotheses that are now well established or understood as truth. These causal mechanisms are properties of the natural world and are thus invariant regardless of the collection domain or environment. We show in this paper how prior knowledge in the form of a causal graph can be utilized to guide model selection, i.e., to identify, from a set of trained networks, the models that are most robust and invariant to unseen domains. Our method incorporates prior knowledge (which can be incomplete) as a Structural Causal Model (SCM) and calculates a score based on the likelihood of the SCM given the target predictions of a candidate model and the provided input variables. We show on both publicly available and synthetic datasets that our method is able to identify models that generalize better to unseen out-of-distribution test examples and to domains where covariates have shifted.
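The scoring step can be sketched generically: substitute a candidate model's predictions for the target node of the SCM, then evaluate how well the data fit the graph. The sketch below assumes linear-Gaussian mechanisms and uses illustrative names (`columns`, `parents`); the paper's likelihood computation may differ.

```python
# Score candidate models against a causal graph: plug each model's
# predictions in for the target node, then sum per-node log-likelihoods
# under assumed linear-Gaussian mechanisms. Illustrative sketch only.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

def scm_log_likelihood(columns, parents):
    """columns: variable name -> 1-D array; parents: name -> parent names."""
    total = 0.0
    for node, pa in parents.items():
        y = columns[node]
        if pa:
            x = np.column_stack([columns[p] for p in pa])
            resid = y - LinearRegression().fit(x, y).predict(x)
        else:
            resid = y - y.mean()
        sigma = resid.std() + 1e-8
        total += stats.norm.logpdf(resid, scale=sigma).sum()
    return total

def score_candidate(model, inputs, columns, parents, target="Y"):
    """Replace the target column with the model's predictions and score."""
    columns = dict(columns)
    columns[target] = model.predict(inputs)
    return scm_log_likelihood(columns, parents)

# The most robust candidate is then the argmax of score_candidate(...)
# over the set of trained networks.
```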