Weichwald, Sebastian
Causal Consistency of Structural Equation Models
Rubenstein, Paul K., Weichwald, Sebastian, Bongers, Stephan, Mooij, Joris M., Janzing, Dominik, Grosse-Wentrup, Moritz, Schölkopf, Bernhard
Complex systems can be modelled at various levels of detail. Ideally, causal models of the same system should be consistent with one another in the sense that they agree in their predictions of the effects of interventions. We formalise this notion of consistency in the case of Structural Equation Models (SEMs) by introducing exact transformations between SEMs. This provides a general language to consider, for instance, the different levels of description in the following three scenarios: (a) models with large numbers of variables versus models in which the 'irrelevant' or unobservable variables have been marginalised out; (b) micro-level models versus macro-level models in which the macro-variables are aggregate features of the micro-variables; (c) dynamical time series models versus models of their stationary behaviour. Our analysis stresses the importance of well-specified interventions in the causal modelling process and sheds light on the interpretation of cyclic SEMs.
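To make the consistency notion concrete, here is a small numerical sketch (our own illustration, not the paper's formal definition of exact transformations, and all names are hypothetical): a micro-level SEM over two variables X1, X2 and their effect Y, and a macro-level SEM in which the macro-variable Z = X1 + X2 aggregates the micro-variables. The micro-level intervention do(X1=1, X2=2) is mapped to the macro-level intervention do(Z=3), and both models predict the same distribution for Y.

```python
# Toy numerical sketch (our own illustration, not the paper's formal
# construction): a micro-level SEM and a macro-level SEM that agree on the
# effect of an intervention once interventions are mapped correctly.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def micro_y(do_x=None):
    # micro-level SEM: X1 := N1, X2 := X1 + N2, Y := X1 + X2 + N3
    n1, n2, n3 = rng.normal(size=(3, n))
    x1, x2 = (n1, n1 + n2) if do_x is None else do_x
    return x1 + x2 + n3

def macro_y(do_z=None):
    # macro-level SEM over Z = X1 + X2: Z := N_Z, Y := Z + N3,
    # where N_Z has the same law as X1 + X2 in the micro model
    n_z = rng.normal(scale=np.sqrt(5), size=n)
    n3 = rng.normal(size=n)
    z = n_z if do_z is None else do_z
    return z + n3

# The micro intervention do(X1=1, X2=2) corresponds to the macro
# intervention do(Z=3); both models predict Y ~ Normal(3, 1).
y_micro, y_macro = micro_y(do_x=(1.0, 2.0)), macro_y(do_z=3.0)
print(y_micro.mean(), y_macro.mean())  # both approx. 3
print(y_micro.std(), y_macro.std())    # both approx. 1
```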
Recovery of non-linear cause-effect relationships from linearly mixed neuroimaging data
Weichwald, Sebastian, Gretton, Arthur, Schölkopf, Bernhard, Grosse-Wentrup, Moritz
Causal inference concerns the identification of cause-effect relationships between variables. However, often only linear combinations of variables constitute meaningful causal variables. For example, recovering the signal of a cortical source from electroencephalography requires a well-tuned combination of signals recorded at multiple electrodes. We recently introduced the MERLiN (Mixture Effect Recovery in Linear Networks) algorithm, which can recover, from an observed linear mixture, a causal variable that is a linear effect of another given variable. Here we relax the assumption that this cause-effect relationship is linear and present an extended algorithm that can pick up non-linear cause-effect relationships. Thus, the main contribution is an algorithm (and ready-to-use code) that has broader applicability and allows for a richer model class. Furthermore, a comparative analysis indicates that the assumption of linear cause-effect relationships is not restrictive when analysing electroencephalographic data.
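As a toy illustration of why this relaxation matters (our own sketch, not the algorithm from the paper): a purely linear dependence measure can entirely miss a non-linear effect, whereas a kernel dependence measure such as HSIC, one common choice in this line of work, detects it. Variable names and the kernel bandwidth below are placeholder choices.

```python
# Toy sketch (illustrative only): a non-linear effect Y = X**2 + noise is
# invisible to linear correlation but is picked up by a kernel dependence
# measure (here a biased HSIC estimate with Gaussian kernels).
import numpy as np

def rbf_kernel(a, sigma=1.0):
    d2 = (a[:, None] - a[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y):
    n = len(x)
    h = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    k, l = rbf_kernel(x), rbf_kernel(y)
    return np.trace(k @ h @ l @ h) / n ** 2      # biased HSIC estimate

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = x ** 2 + 0.1 * rng.normal(size=500)          # non-linear effect of x
z = rng.normal(size=500)                         # independent control

print(np.corrcoef(x, y)[0, 1])  # approx. 0: linear correlation misses the effect
print(hsic(x, y), hsic(x, z))   # HSIC(x, y) is clearly larger than HSIC(x, z)
```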
MERLiN: Mixture Effect Recovery in Linear Networks
Weichwald, Sebastian, Grosse-Wentrup, Moritz, Gretton, Arthur
Causal inference concerns the identification of cause-effect relationships between variables, e.g. establishing whether a stimulus affects activity in a certain brain region. The observed variables themselves, however, often do not constitute meaningful causal variables, and linear combinations need to be considered. In electroencephalographic studies, for example, one is not interested in establishing cause-effect relationships between electrode signals (the observed variables), but rather between cortical signals (the causal variables), which can be recovered as linear combinations of electrode signals. We introduce MERLiN (Mixture Effect Recovery in Linear Networks), a family of causal inference algorithms that implement a novel means of constructing causal variables from non-causal variables. We demonstrate on EEG data how the basic MERLiN algorithm can be extended to different (neuroimaging) data modalities. Given an observed linear mixture, the algorithms can recover a causal variable that is a linear effect of another given variable. That is, MERLiN allows us to recover a cortical signal that is affected by activity in a certain brain region while not being a direct effect of the stimulus. The Python/Matlab implementation of all presented algorithms is available at https://github.com/sweichwald/MERLiN
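For intuition, the following sketch sets up the kind of problem MERLiN addresses (illustrative data generation only; it does not implement the algorithm, and all names and dimensions are hypothetical): cortical sources C1 -> C2 are observed only through a linear mixture F = A C at the electrodes, and the target is a weight vector w such that w F recovers the effect variable C2.

```python
# Problem-setting sketch (not the MERLiN algorithm): cortical signals are
# observed as a linear mixture at the electrodes; a suitable weight vector
# applied to the mixture recovers the causal effect variable C2.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_electrodes = 1000, 8

s = rng.integers(0, 2, size=n_samples)           # stimulus
c1 = s + rng.normal(size=n_samples)              # C1: effect of the stimulus
c2 = 0.8 * c1 + rng.normal(size=n_samples)       # C2: linear effect of C1
background = rng.normal(size=(3, n_samples))     # further cortical background
c = np.vstack([c1, c2, background])              # cortical sources

a = rng.normal(size=(n_electrodes, c.shape[0]))  # unknown mixing matrix
f = a @ c                                        # observed electrode signals

# With the (here known) mixing matrix, ideal unmixing weights are the row of
# the pseudo-inverse corresponding to C2; MERLiN has to find such a w from
# (s, C1, F) alone.
w = np.linalg.pinv(a)[1]
print(np.corrcoef(w @ f, c2)[0, 1])              # approx. 1
```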
Pymanopt: A Python Toolbox for Optimization on Manifolds using Automatic Differentiation
Townsend, James, Koep, Niklas, Weichwald, Sebastian
Optimization on manifolds is a class of methods for optimizing an objective function subject to constraints that are smooth, in the sense that the set of points satisfying the constraints admits the structure of a differentiable manifold. While many optimization problems are of this form, technicalities of differential geometry and the laborious calculation of derivatives pose a significant barrier to experimenting with these methods. We introduce Pymanopt (available at https://pymanopt.github.io), a toolbox for optimization on manifolds, implemented in Python, that, similarly to the Manopt Matlab toolbox, implements several manifold geometries and optimization algorithms. Moreover, we further lower the barrier for users by using automatic differentiation to calculate derivative information, saving users time and saving them from potential calculation and implementation errors.
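A minimal usage sketch, close to the introductory example in the Pymanopt documentation at the time of this paper; module and class names may differ in later releases of the toolbox, so treat the exact API below as an assumption. Only the manifold and a cost function are specified; derivative information is obtained by automatic differentiation.

```python
# Minimal Pymanopt sketch (API as of the version described in the paper;
# newer releases may use different module/class names).
import autograd.numpy as np
from pymanopt import Problem
from pymanopt.manifolds import Stiefel
from pymanopt.solvers import SteepestDescent

# (1) instantiate a manifold: 5x2 matrices with orthonormal columns
manifold = Stiefel(5, 2)

# (2) define a cost function; gradients are derived automatically via autograd
def cost(X):
    return np.sum(X)

problem = Problem(manifold=manifold, cost=cost)

# (3) run a solver that respects the manifold constraint
solver = SteepestDescent()
Xopt = solver.solve(problem)
print(Xopt)
```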
Causal and anti-causal learning in pattern recognition for neuroimaging
Weichwald, Sebastian, Schölkopf, Bernhard, Ball, Tonio, Grosse-Wentrup, Moritz
Pattern recognition in neuroimaging distinguishes between two types of models: encoding and decoding models. This distinction is based on the insight that brain state features that are found to be relevant in an experimental paradigm carry a different meaning in encoding than in decoding models. In this paper, we argue that this distinction is not sufficient: relevant features in encoding and decoding models carry a different meaning depending on whether they represent causal or anti-causal relations. We provide a theoretical justification for this argument and conclude that causal inference is essential for interpretation in neuroimaging.
Decoding index finger position from EEG using random forests
Weichwald, Sebastian, Meyer, Timm, Schölkopf, Bernhard, Ball, Tonio, Grosse-Wentrup, Moritz
While invasively recorded brain activity is known to provide detailed information on motor commands, it is an open question at what level of detail information about positions of body parts can be decoded from non-invasively acquired signals. In this work, it is shown that index finger positions can be differentiated from non-invasive electroencephalographic (EEG) recordings in healthy human subjects. Using a leave-one-subject-out cross-validation procedure, a random forest distinguished different index finger positions on a numerical keyboard with above chance-level accuracy. Among the different spectral features investigated, high β-power (20-30 Hz) over contralateral sensorimotor cortex carried the most information about finger position. Thus, these findings indicate that finger position is in principle decodable from non-invasive features of brain activity that generalize across individuals.
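A schematic sketch of the evaluation pipeline described above, with placeholder data and hypothetical dimensions (feature extraction from raw EEG, e.g. computing β band power per channel, is omitted): a random forest classifier evaluated with leave-one-subject-out cross-validation.

```python
# Schematic leave-one-subject-out evaluation with a random forest
# (placeholder random features, so accuracy here will sit at chance level).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 600, 64                      # e.g. band power per channel
X = rng.normal(size=(n_trials, n_features))         # placeholder features
y = rng.integers(0, 4, size=n_trials)               # hypothetical position labels
subjects = np.repeat(np.arange(10), n_trials // 10) # subject ID per trial

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print(scores.mean())  # compare against chance level (0.25 for 4 classes)
```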
Causal interpretation rules for encoding and decoding models in neuroimaging
Weichwald, Sebastian, Meyer, Timm, Özdenizci, Ozan, Schölkopf, Bernhard, Ball, Tonio, Grosse-Wentrup, Moritz
Causal terminology is often introduced in the interpretation of encoding and decoding models trained on neuroimaging data. In this article, we investigate which causal statements are warranted and which ones are not supported by empirical evidence. We argue that the distinction between encoding and decoding models is not sufficient for this purpose: relevant features in encoding and decoding models carry a different meaning in stimulus- and in response-based experimental paradigms. We show that only encoding models in the stimulus-based setting support unambiguous causal interpretations. By combining encoding and decoding models trained on the same data, however, we obtain insights into causal relations beyond those that are implied by each individual model type. We illustrate the empirical relevance of our theoretical findings on EEG data recorded during a visuo-motor learning task.