Collaborating Authors

 Jørgensen, Frederik Hytting


What is causal about causal models and representations?

arXiv.org Machine Learning

Causal Bayesian networks are 'causal' models since they make predictions about interventional distributions. To connect such causal model predictions to real-world outcomes, we must determine which actions in the world correspond to which interventions in the model. For example, to interpret an action as an intervention on a treatment variable, the action will presumably have to a) change the distribution of treatment in a way that corresponds to the intervention, and b) not change other aspects, such as how the outcome depends on the treatment; while the marginal distributions of some variables may change as an effect. We introduce a formal framework that makes precise such requirements for different interpretations of actions as interventions. We prove that the seemingly natural interpretation of actions as interventions is circular: Under this interpretation, every causal Bayesian network that correctly models the observational distribution is trivially also interventionally valid, and no action yields empirical data that could possibly falsify such a model. We prove an impossibility result: No interpretation exists that is non-circular and simultaneously satisfies a set of natural desiderata. Instead, we examine non-circular interpretations that may violate some desiderata and show how this may in turn enable the falsification of causal models. By rigorously examining how a causal Bayesian network could be a 'causal' model of the world instead of merely a mathematical object, our formal framework contributes to the conceptual foundations of causal representation learning, causal discovery, and causal abstraction, while also highlighting some limitations of existing approaches.
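The do-intervention semantics the abstract builds on can be made concrete in a few lines of code. The following minimal sketch (a toy example of my own, not code from the paper; all names and mechanisms are hypothetical) samples from a small causal Bayesian network with confounder C, treatment T, and outcome Y, and contrasts the observational marginal P(Y=1) with the interventional P(Y=1 | do(T=1)): the intervention replaces the mechanism that generates T while leaving the outcome mechanism P(Y | T, C) untouched, yet the marginal of Y changes as an effect.

```python
# Toy illustration (hypothetical, not from the paper) of do-intervention
# semantics in a causal Bayesian network C -> T -> Y with C -> Y.
import random

random.seed(0)

def sample_p_y(intervention=None, n=100_000):
    """Estimate P(Y=1) by forward sampling.

    If `intervention` is given, it is a fixed value for T (a 'do' on T):
    the mechanism P(T | C) is replaced, while P(Y | T, C) is kept as-is.
    """
    ys = []
    for _ in range(n):
        c = random.random() < 0.5                      # confounder C ~ Bernoulli(0.5)
        if intervention is None:
            t = random.random() < (0.8 if c else 0.2)  # observational mechanism P(T | C)
        else:
            t = intervention                           # do(T = t): new mechanism for T
        p_y = 0.7 * t + 0.2 * c                        # outcome mechanism P(Y=1 | T, C), unchanged
        ys.append(random.random() < p_y)
    return sum(ys) / n

print("observational   P(Y=1)          ", sample_p_y())                  # ~0.45
print("interventional  P(Y=1 | do(T=1))", sample_p_y(intervention=True)) # ~0.80
```

The paper's question is precisely which real-world actions, if any, license reading an action as the `intervention` branch above rather than as some other change to the model's mechanisms.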


Unfair Utilities and First Steps Towards Improving Them

arXiv.org Artificial Intelligence

A challenge in algorithmic fairness is to formalize the notion of fairness. Often, one attribute S is considered protected (also called sensitive) and a quantity Y is to be predicted as Ŷ from some covariates X. Many fairness criteria correspond to constraints on the joint distribution of (S, X, Y, Ŷ) that can often be phrased as (conditional) independence statements or that take the causal structure of the problem into account [see, for example, Barocas et al., 2023, Verma and Rubin, 2018, Nilforoshan et al., 2022, for an overview]. In this work, we propose an alternative point of view that considers situations in which an agent aims to optimize a policy so as to maximize a known utility. In such scenarios, unwanted discrimination may occur if the utility itself is unfair.
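To make the independence-based view mentioned in the abstract concrete, here is a small illustrative sketch (hypothetical, not code from the paper): demographic parity requires Ŷ ⊥ S, which can be checked empirically as the gap between group-conditional prediction rates over samples of (S, Y, Ŷ).

```python
# Hypothetical sketch: checking demographic parity, Yhat independent of S,
# as a gap between group-conditional rates over (s, y, yhat) tuples.

def rate(samples, predicate, condition):
    """Empirical P(predicate | condition) over (s, y, yhat) tuples."""
    hits = [predicate(r) for r in samples if condition(r)]
    return sum(hits) / len(hits) if hits else float("nan")

def parity_gap(samples):
    """|P(Yhat=1 | S=1) - P(Yhat=1 | S=0)|; zero under demographic parity."""
    return abs(
        rate(samples, lambda r: r[2] == 1, lambda r: r[0] == 1)
        - rate(samples, lambda r: r[2] == 1, lambda r: r[0] == 0)
    )

# toy data: tuples (s, y, yhat)
data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1), (0, 1, 0), (1, 0, 0)]
print("demographic parity gap:", parity_gap(data))  # 1/3 on this toy data
```

The paper's point of departure is that such constraints are placed on the predictor Ŷ or the policy; if instead the utility being maximized encodes the unfairness, constraining Ŷ alone does not address it.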