
Collaborating Authors

 Weichwald, Sebastian


What is causal about causal models and representations?

arXiv.org Machine Learning

Causal Bayesian networks are 'causal' models since they make predictions about interventional distributions. To connect such causal model predictions to real-world outcomes, we must determine which actions in the world correspond to which interventions in the model. For example, to interpret an action as an intervention on a treatment variable, the action will presumably have to a) change the distribution of treatment in a way that corresponds to the intervention, and b) not change other aspects, such as how the outcome depends on the treatment, even though the marginal distributions of some variables may change as an effect of the action. We introduce a formal framework to make such requirements for different interpretations of actions as interventions precise. We prove that the seemingly natural interpretation of actions as interventions is circular: Under this interpretation, every causal Bayesian network that correctly models the observational distribution is trivially also interventionally valid, and no action yields empirical data that could possibly falsify such a model. We prove an impossibility result: No interpretation exists that is non-circular and simultaneously satisfies a set of natural desiderata. Instead, we examine non-circular interpretations that may violate some desiderata and show how this may in turn enable the falsification of causal models. By rigorously examining how a causal Bayesian network could be a 'causal' model of the world instead of merely a mathematical object, our formal framework contributes to the conceptual foundations of causal representation learning, causal discovery, and causal abstraction, while also highlighting some limitations of existing approaches.
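
A minimal sketch of the distinction the abstract draws (not from the paper; the three-variable network, the variable names Z, T, Y, and the probabilities are illustrative assumptions): in a causal Bayesian network, an intervention on the treatment replaces the treatment mechanism via the truncated factorization while leaving the other mechanisms untouched.

    from itertools import product

    # A three-variable causal Bayesian network Z -> T -> Y, Z -> Y (illustrative numbers).
    p_z = {0: 0.6, 1: 0.4}                                    # P(Z)
    p_t_given_z = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}  # P(T | Z)
    p_y_given_tz = {(t, z): {0: 1 - q, 1: q}                  # P(Y | T, Z)
                    for (t, z), q in {(0, 0): 0.1, (0, 1): 0.3,
                                      (1, 0): 0.5, (1, 1): 0.9}.items()}

    def p_y(intervene_t=None):
        """Marginal of Y; under do(T=t*) the treatment mechanism is replaced,
        while P(Z) and P(Y | T, Z) are left untouched (truncated factorization)."""
        total = 0.0
        for z, t, y in product((0, 1), repeat=3):
            if y != 1:
                continue
            p_t = (1.0 if t == intervene_t else 0.0) if intervene_t is not None \
                  else p_t_given_z[z][t]
            total += p_z[z] * p_t * p_y_given_tz[(t, z)][y]
        return total

    print("observational P(Y=1)         :", round(p_y(), 3))
    print("interventional P(Y=1|do(T=1)):", round(p_y(intervene_t=1), 3))

The paper's question is precisely when a real-world action may be read as such a replacement of the treatment mechanism with everything else left fixed.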


All or None: Identifiable Linear Properties of Next-token Predictors in Language Modeling

arXiv.org Machine Learning

In natural language processing, it is well-established that linear relationships between high-dimensional, real-valued vector representations of textual inputs reflect semantic and syntactic patterns. This was motivated in seminal works [4, 5, 6, 7, 8] and extensively validated in word embedding models [9, 10, 11] as well as modern large language models trained for next-token prediction [2, 12, 13, 14, 15, 16, 17, 18, 19]. This ubiquity is puzzling, as different internal representations can produce identical next-token distributions, resulting in distribution-equivalent but internally distinct models. This raises a key question: Are the observed linear properties shared across all models with the same next-token distribution? Our main result is a mathematical proof that, under suitable conditions, certain linear properties hold for either all or none of the equivalent models generating a given next-token distribution. We demonstrate this through three main contributions. The first main contribution (Section 3) is an identifiability result characterizing distribution-equivalent next-token predictors. Our result is a generalization of the main theorems by Roeder et al. [3] and Khemakhem et al. [20], relaxing the assumptions of diversity and equal representation dimensionality. This result is of independent interest for research on identifiable representation learning, since our analysis is applicable to several discriminative models beyond next-token prediction [3].
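
A small numerical illustration of distribution-equivalence (a sketch under assumptions, not the paper's construction: the dimensions, random matrices, and the usual softmax-of-a-linear-unembedding parameterization are illustrative): composing the representations with an invertible linear map A and the unembedding with A^{-1} leaves every next-token distribution unchanged, while the internal representations differ.

    import numpy as np

    rng = np.random.default_rng(0)
    d, vocab, n = 8, 20, 5                       # representation dim, vocab size, #contexts

    H = rng.normal(size=(n, d))                  # representations f(x) for n contexts
    U = rng.normal(size=(vocab, d))              # unembedding matrix
    A = rng.normal(size=(d, d))                  # some invertible linear map

    def softmax(logits):
        e = np.exp(logits - logits.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    # Model 1: p(y|x) = softmax(U f(x)); Model 2: representations A f(x), unembedding U A^{-1}.
    p1 = softmax(H @ U.T)
    p2 = softmax((H @ A.T) @ (U @ np.linalg.inv(A)).T)

    print(np.allclose(p1, p2))                   # True: identical next-token distributions
    print(np.allclose(H, H @ A.T))               # False: internally distinct representations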


Adjustment Identification Distance: A gadjid for Causal Structure Learning

arXiv.org Machine Learning

Evaluating graphs learned by causal discovery algorithms is difficult: The number of edges that differ between two graphs does not reflect how the graphs differ with respect to the identifying formulas they suggest for causal effects. We introduce a framework for developing causal distances between graphs which includes the structural intervention distance for directed acyclic graphs as a special case. We use this framework to develop improved adjustment-based distances as well as extensions to completed partially directed acyclic graphs and causal orders. We develop polynomial-time reachability algorithms to compute the distances efficiently. In our package gadjid (open source at https://github.com/CausalDisco/gadjid), we provide implementations of our distances; they are orders of magnitude faster than the structural intervention distance and thereby provide a success metric for causal discovery that scales to graph sizes that were previously prohibitive.
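
To illustrate the opening observation, a toy example (not using the gadjid API; the node names, graphs, and the SHD convention of counting an edge reversal once are illustrative choices): two guessed DAGs can differ from the truth by the same number of edges while suggesting adjustment sets of very different quality.

    import numpy as np

    # Adjacency matrices, A[i, j] = 1 meaning an edge i -> j; nodes W=0, T=1, Y=2.
    truth = np.array([[0, 1, 0],    # W -> T
                      [0, 0, 1],    # T -> Y
                      [0, 0, 0]])
    guess_a = truth.copy()
    guess_a[0, 2] = 1                      # adds the edge W -> Y
    guess_b = truth.copy()
    guess_b[1, 2], guess_b[2, 1] = 0, 1    # reverses T -> Y

    def shd(g, h):
        """Number of node pairs whose edge status differs (a reversal counted once)."""
        return int(np.triu((g != h) | (g.T != h.T), 1).sum())

    def parent_adjustment(g, treatment):
        """Adjustment set suggested by g for the effect of `treatment`: its parents in g."""
        return set(np.flatnonzero(g[:, treatment]))

    # Both guesses are one edge away from the truth ...
    print(shd(truth, guess_a), shd(truth, guess_b))
    # ... but guess_a suggests {W}, a valid adjustment set in the true graph, whereas
    # guess_b suggests {W, Y}, which adjusts for the outcome and is invalid.
    print(parent_adjustment(guess_a, 1), parent_adjustment(guess_b, 1))

Adjustment-based distances of the kind implemented in gadjid are designed to tell such cases apart, while the plain edge-difference count rates both guesses the same.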


A Scale-Invariant Sorting Criterion to Find a Causal Order in Additive Noise Models

arXiv.org Machine Learning

Additive Noise Models (ANMs) are a common model class for causal discovery from observational data and are often used to generate synthetic data for causal discovery benchmarking. Specifying an ANM requires choosing all parameters, including those not fixed by explicit assumptions. Reisach et al. (2021) show that sorting variables by increasing variance often yields an ordering close to a causal order and introduce var-sortability to quantify this alignment. Since increasing variances may be unrealistic and are scale-dependent, ANM data are often standardized in benchmarks. We show that synthetic ANM data are characterized by another pattern that is scale-invariant: the explainable fraction of a variable's variance, as captured by the coefficient of determination $R^2$, tends to increase along the causal order. The result is high $R^2$-sortability, meaning that sorting the variables by increasing $R^2$ yields an ordering close to a causal order. We propose an efficient baseline algorithm termed $R^2$-SortnRegress that exploits high $R^2$-sortability and that can match and exceed the performance of established causal discovery algorithms. We show analytically that sufficiently high edge weights lead to a relative decrease of the noise contributions along causal chains, resulting in increasingly deterministic relationships and high $R^2$. We characterize $R^2$-sortability for different simulation parameters and find high values in common settings. Our findings reveal high $R^2$-sortability as an assumption about the data generating process relevant to causal discovery and implicit in many ANM sampling schemes. It should be made explicit, as its prevalence in real-world data is unknown. For causal discovery benchmarking, we implement $R^2$-sortability, the $R^2$-SortnRegress algorithm, and ANM simulation procedures in our library CausalDisco at https://causaldisco.github.io/CausalDisco/.
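
A minimal sketch of the sorting criterion on a single simulated chain (illustrative graph, edge weight, and sample size; not the CausalDisco implementation, and the final score is a simplified pairwise agreement in the spirit of $R^2$-sortability rather than its exact definition): after standardization the variance pattern is gone, but the coefficient of determination of each variable regressed on all others still tends to increase along the causal order.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, w = 5000, 5, 1.5

    # Linear ANM along the chain X1 -> X2 -> ... -> X5, unit-variance noise, edge weight w.
    X = np.zeros((n, d))
    X[:, 0] = rng.normal(size=n)
    for j in range(1, d):
        X[:, j] = w * X[:, j - 1] + rng.normal(size=n)

    X = (X - X.mean(0)) / X.std(0)   # standardize: variances equalized, R^2 is scale-invariant

    def r2(X, j):
        """Coefficient of determination of regressing X_j on all remaining variables (OLS)."""
        others = np.delete(X, j, axis=1)
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        return 1 - np.var(X[:, j] - others @ coef) / np.var(X[:, j])

    r2s = np.array([r2(X, j) for j in range(d)])
    print(np.round(r2s, 3))          # tends to increase along the causal order

    # Fraction of causally ordered pairs (i before j in the chain) with R^2_i < R^2_j:
    pairs = [(i, j) for i in range(d) for j in range(i + 1, d)]
    print(sum(r2s[i] < r2s[j] for i, j in pairs) / len(pairs))  # high (about 0.9 here)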


Unfair Utilities and First Steps Towards Improving Them

arXiv.org Artificial Intelligence

A challenge in algorithmic fairness is to formalize the notion of fairness. Often, one attribute S is considered protected (also called sensitive) and a quantity Y is to be predicted as Ŷ from some covariates X. Many criteria for fairness correspond to constraints on the joint distribution of (S, X, Y, Ŷ) that can often be phrased as (conditional) independence statements or take the causal structure of the problem into account [see, for example, Barocas et al., 2023, Verma and Rubin, 2018, Nilforoshan et al., 2022, for an overview]. In this work, we propose an alternative point of view that considers situations where an agent aims to optimize a policy so as to maximize a known utility. In such scenarios, unwanted discrimination may occur if the utility itself is unfair.
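
For context on the independence-based criteria mentioned above, a toy check of demographic parity, Ŷ independent of S (purely illustrative and not the paper's proposal; the synthetic data, the threshold predictor, and the variable roles are assumptions):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    S = rng.integers(0, 2, size=n)              # protected attribute
    X = rng.normal(loc=S, scale=1.0, size=n)    # covariate correlated with S
    Y_hat = (X > 0.5).astype(int)               # a threshold predictor built from X only

    # Demographic parity asks for Y_hat independent of S, i.e. equal acceptance rates:
    for s in (0, 1):
        print(f"P(Y_hat = 1 | S = {s}) = {Y_hat[S == s].mean():.3f}")
    # The rates differ, so this predictor violates demographic parity even though it
    # never uses S directly: the dependence enters through X.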


Identifying Causal Effects using Instrumental Time Series: Nuisance IV and Correcting for the Past

arXiv.org Machine Learning

Instrumental variable (IV) regression relies on instruments to infer causal effects from observational data with unobserved confounding. We consider IV regression in time series models, such as vector auto-regressive (VAR) processes. Direct applications of i.i.d. techniques are generally inconsistent as they do not correctly adjust for dependencies in the past. In this paper, we propose a methodology for constructing identifying equations that can be used for consistently estimating causal effects. To do so, we develop nuisance IV, which can be of interest even in the i.i.d. case, as it generalizes existing IV methods. We further propose a graph marginalization framework that allows us to apply nuisance and other IV methods in a principled way to time series. Our framework builds on the global Markov property, which we prove holds for VAR processes. For VAR(1) processes, we prove identifiability conditions that relate to Jordan forms and are different from the well-known rank conditions in the i.i.d. case (they do not require as many instruments as covariates, for example). We provide methods, prove their consistency, and show how the inferred causal effect can be used for distribution generalization. Simulation experiments corroborate our theoretical results. We provide ready-to-use Python code.
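
As background for the i.i.d. starting point, a standard instrumental-variable sketch (not the paper's nuisance IV or its time series procedure; the structural equations and coefficients are illustrative assumptions): with instrument Z, hidden confounder H, treatment X, and outcome Y, ordinary regression of Y on X is biased, while the classical IV estimate recovers the causal coefficient.

    import numpy as np

    rng = np.random.default_rng(3)
    n, beta = 100_000, 2.0                        # true causal effect of X on Y

    Z = rng.normal(size=n)                        # instrument
    H = rng.normal(size=n)                        # hidden confounder
    X = 1.0 * Z + 1.0 * H + rng.normal(size=n)    # treatment
    Y = beta * X - 2.0 * H + rng.normal(size=n)   # outcome

    ols = np.cov(X, Y)[0, 1] / np.var(X, ddof=1)    # biased by the confounder H
    iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]    # classical IV / two-stage least squares
    print(round(ols, 3), round(iv, 3))              # ols is off (about 1.33 here), iv is near 2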


Learning by Doing: Controlling a Dynamical System using Causality, Control, and Reinforcement Learning

arXiv.org Machine Learning

Questions in causality, control, and reinforcement learning go beyond the classical machine learning task of prediction under i.i.d. observations. Instead, these fields consider the problem of learning how to actively perturb a system to achieve a certain effect on a response variable. Arguably, they have complementary views on the problem: In control, one usually aims to first identify the system by excitation strategies and to then apply model-based design techniques to control the system. In (non-model-based) reinforcement learning, one directly optimizes a reward. In causality, one focus is the identifiability of causal structure. We believe that combining these different views might create synergies, and this competition is meant as a first step toward such synergies. The participants had access to observational and (offline) interventional data generated by dynamical systems. Track CHEM considers an open-loop problem in which a single impulse at the beginning of the dynamics can be set, while Track ROBO considers a closed-loop problem in which control variables can be set at each time step. The goal in both tracks is to infer controls that drive the system to a desired state. Code is open-sourced (https://github.com/LearningByDoingCompetition/learningbydoing-comp) to reproduce the winning solutions of the competition and to facilitate trying out new methods on the competition tasks.
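
A toy closed-loop example in the spirit of Track ROBO (purely illustrative; the competition systems, dynamics, and interfaces differ, and the matrices A, B and the hand-tuned gain K are assumptions): a linear system x_{t+1} = A x_t + B u_t is driven toward a target state with a simple proportional feedback controller.

    import numpy as np

    A = np.array([[1.0, 0.1],
                  [0.0, 0.9]])          # assumed known linear dynamics, illustrative only
    B = np.array([[0.0],
                  [0.1]])
    x_target = np.array([1.0, 0.0])

    x = np.zeros(2)
    K = np.array([[8.0, 4.0]])          # hand-tuned proportional feedback gain
    for t in range(200):
        u = K @ (x_target - x)          # closed loop: the control depends on the current state
        x = A @ x + B @ u
    print(np.round(x, 3))               # the state ends up close to the target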


Compositional Abstraction Error and a Category of Causal Models

arXiv.org Artificial Intelligence

Interventional causal models describe joint distributions over some variables used to describe a system, one for each intervention setting. They provide a formal recipe for how to move between joint distributions and make predictions about the variables upon intervening on the system. Yet, it is difficult to formalise how we may change the underlying variables used to describe the system, say from fine-grained to coarse-grained variables. Here, we argue that compositionality is a desideratum for model transformations and the associated errors. We develop a framework for model transformations and abstractions with a notion of error that is compositional: when abstracting a reference model M modularly, first obtaining M' and then further simplifying M' to obtain M'', the composite transformation from M to M'' exists and its error can be bounded by the errors incurred by each individual transformation step. Category theory, the study of mathematical objects via the compositional transformations between them, offers a natural language for developing our framework. We introduce a category of finite interventional causal models and, leveraging the theory of enriched categories, prove that our framework enjoys the desired compositionality properties.
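
In symbols, as one illustrative reading of the compositionality statement (the additive form is an assumption for concreteness, not necessarily the paper's exact bound): writing $e(\cdot)$ for the abstraction error of a transformation and $\alpha: M \to M'$, $\beta: M' \to M''$ for the two abstraction steps, compositionality means that the composite $\beta \circ \alpha: M \to M''$ exists and satisfies a bound such as $e(\beta \circ \alpha) \leq e(\alpha) + e(\beta)$, so the error accumulates controllably along chains of successive coarse-grainings.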


Beware of the Simulated DAG! Varsortability in Additive Noise Models

arXiv.org Machine Learning

Additive noise models are a class of causal models in which each variable is defined as a function of its causes plus independent noise. In such models, the ordering of variables by marginal variances may be indicative of the causal order. We introduce varsortability as a measure of agreement between the ordering by marginal variance and the causal order. We show how varsortability dominates the performance of continuous structure learning algorithms on synthetic data. On real-world data, varsortability is an implausible and untestable assumption and we find no indication of high varsortability. We aim to raise awareness that varsortability easily occurs in simulated additive noise models. We provide a baseline method that explicitly exploits varsortability and advocate reporting varsortability in benchmarking data.
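
A simplified numerical sketch (illustrative simulation parameters; not the paper's exact definition of varsortability, which also accounts for longer directed paths and ties): on data from a randomly weighted linear additive noise model, the marginal variance increases along most edges, so ordering by variance nearly recovers the causal order.

    import numpy as np

    rng = np.random.default_rng(4)
    n, d = 5000, 6

    # Random upper-triangular DAG with weights drawn from [0.5, 2] (a common simulation scheme).
    W = np.triu(rng.uniform(0.5, 2.0, size=(d, d)) * (rng.random((d, d)) < 0.5), k=1)
    X = np.zeros((n, d))
    for j in range(d):
        X[:, j] = X @ W[:, j] + rng.normal(size=n)   # linear ANM with standard normal noise

    variances = X.var(axis=0)
    edges = np.argwhere(W != 0)
    print((variances[edges[:, 0]] < variances[edges[:, 1]]).mean())  # typically close to 1
    print(np.argsort(variances))   # ordering by marginal variance, typically close to 0..5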


groupICA: Independent component analysis for grouped data

arXiv.org Machine Learning

We introduce groupICA, a novel independent component analysis (ICA) algorithm that decomposes linearly mixed multivariate observations into independent components that are corrupted (and rendered dependent) by hidden group-wise confounding. It extends the ordinary ICA model in a theoretically sound and explicit way to incorporate group-wise (or environment-wise) structure in the data and hence provides a justified alternative to the use of ICA on data blindly pooled across groups. In addition to our theoretical framework, we explain its causal interpretation and motivation, provide an efficient estimation procedure, and prove identifiability of the unmixing matrix under mild assumptions. Finally, we illustrate the performance and robustness of our method on simulated data and run experiments on publicly available EEG datasets demonstrating the applicability to real-world scenarios. We provide a scikit-learn compatible pip-installable Python package groupICA as well as R and Matlab implementations, accompanied by documentation and an audible example at https://sweichwald.de/groupICA.
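
A sketch of the kind of data-generating process the abstract describes (the dimensions, source distribution, and the rank-one group-wise confounding are illustrative assumptions; for the groupICA estimator itself, the package documentation at the linked page is the reference): observations in each group are a fixed linear mixture of independent sources that are additionally corrupted by group-wise confounding, which is what breaks ordinary ICA on blindly pooled data.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(5)
    n_per_group, n_groups, d = 2000, 3, 4

    A = rng.normal(size=(d, d))                   # shared mixing matrix
    X, groups = [], []
    for g in range(n_groups):
        S = rng.laplace(size=(n_per_group, d))    # independent non-Gaussian sources
        # One simple way to simulate group-wise confounding: a fixed direction per group
        # with a random per-sample amplitude, rendering the corrupted components dependent.
        H = rng.normal(size=(1, d)) * rng.normal(size=(n_per_group, 1))
        X.append((S + H) @ A.T)                   # corrupted sources, then mixed
        groups.append(np.full(n_per_group, g))
    X, groups = np.vstack(X), np.concatenate(groups)

    # Blindly pooling across groups and running ordinary ICA ignores the confounding;
    # groupICA additionally uses the group index to account for it (see its documentation).
    S_hat = FastICA(n_components=d, random_state=0).fit_transform(X)
    print(X.shape, groups.shape, S_hat.shape)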