Granger causality


A Simple yet Scalable Granger Causal Structural Learning Approach for Topological Event Sequences

Neural Information Processing Systems

Such causal graphs delineate the relations among alarms and can significantly aid engineers in identifying and rectifying faults. However, existing methods either ignore the topological relationships among devices or suffer from relatively low scalability and efficiency, failing to deliver high-quality responses in a timely manner.





Constraint- and Score-Based Nonlinear Granger Causality Discovery with Kernels

Murphy, Fiona, Benavoli, Alessio

arXiv.org Machine Learning

Granger causality (GC) [15] is a time series causal discovery framework that uses predictive modeling to identify the underlying causal structure of a time series system. Relying on the assumption that cause precedes effect, GC assesses whether including the lagged information from one time series in the autoregressive model of a second time series enhances its predictions. This improvement indicates a predictive relationship between the time series variables, where one time series provides supplemental information about the future of another, thereby signifying the presence of a (Granger) causal relationship. GC requires only observational data, and has been used for time series causal discovery across diverse domains, including climate science [33], political and social sciences [17], econometrics [4], and biological systems studies [13]. The original formulation of GC requires several assumptions to be satisfied for causal identifiability. With regard to the candidate time series system, it is assumed that the time series variables are stationary and that all variables are observed (absence of latent confounders). GC was initially proposed for bivariate time series systems, but was generalised to the multivariate setting to accommodate the assumption that all relevant variables are included in the analysis [15]. Additional assumptions are made about the types of causal relationships that can be identified within the time series system: GC cannot detect instantaneous (contemporaneous) causal relationships, since it relies on the relationship between lagged values and predicted values to determine a GC relationship.
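The bivariate test described above can be sketched in a few lines. The following is a minimal lag-1 illustration of the classical F-test formulation, not the kernel-based method of this paper; the simulated system and all variable names are invented for illustration:

```python
# Minimal sketch of a bivariate Granger causality F-test (lag 1).
# The simulated system and all names are illustrative, not from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)            # candidate cause (white noise)
y = np.zeros(n)                   # candidate effect, driven by lagged x
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def rss(X, target):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return float(np.sum((target - X @ beta) ** 2))

def granger_f_test(cause, effect):
    """Does the lag of `cause` improve an AR(1) model of `effect`?"""
    target = effect[1:]
    restricted = np.column_stack([np.ones(len(target)), effect[:-1]])
    full = np.column_stack([restricted, cause[:-1]])
    rss_r, rss_f = rss(restricted, target), rss(full, target)
    df2 = len(target) - full.shape[1]
    F = (rss_r - rss_f) / (rss_f / df2)  # one extra regressor -> df1 = 1
    return F, stats.f.sf(F, 1, df2)

F_xy, p_xy = granger_f_test(x, y)  # x -> y: coupling was simulated
F_yx, p_yx = granger_f_test(y, x)  # y -> x: no coupling was simulated
```

With the strong lag-1 coupling simulated above, the forward test is overwhelmingly significant, while the reverse test shows no comparable evidence, which is exactly the asymmetry the test is designed to detect.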


grangersearch: An R Package for Exhaustive Granger Causality Testing with Tidyverse Integration

Korfiatis, Nikolaos

arXiv.org Machine Learning

Understanding causal relationships between time series variables is a fundamental problem in economics, finance, neuroscience, and many other fields. While true causality is philosophically complex and difficult to establish from observational data alone, Granger (1969) proposed a practical, testable notion of causality based on predictability: a variable X is said to "Granger-cause" another variable Y if past values of X contain information that helps predict Y beyond what is contained in past values of Y alone. Granger causality testing has found applications across diverse domains. In macroeconomics, Sims (1972) famously applied the technique to study money-income relationships, while Kraft and Kraft (1978) pioneered its use in energy economics. Financial market researchers including Hiemstra and Jones (1994) have extended the methodology to study price-volume dynamics, and neuroscientists have adapted Granger causality for brain connectivity analysis (Seth, Barrett, and Barnett 2015). The statistical foundations rest on vector autoregressive (VAR) models (Sims 1980), with comprehensive treatments available in Lütkepohl (2005) and discussions of causal interpretation in Peters, Janzing, and Schölkopf (2017). Despite its popularity, implementing Granger causality tests in R (R Core Team 2024) remains cumbersome for applied researchers.
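grangersearch itself is an R package, but the exhaustive all-pairs workflow it automates can be mimicked in a few lines. The sketch below loops a simple lag-1 F-test over every ordered pair of series; the three simulated series and the single coupling a → b are invented for illustration:

```python
# Exhaustive pairwise Granger testing, sketched in Python for illustration.
# (grangersearch is an R package; this mimics only the all-pairs idea.)
import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 400
a = np.zeros(n)
b = np.zeros(n)
c = rng.normal(size=n)            # independent series: no causal links
for t in range(1, n):
    a[t] = 0.6 * a[t - 1] + rng.normal()
    b[t] = 0.4 * b[t - 1] + 0.7 * a[t - 1] + 0.2 * rng.normal()

series = {"a": a, "b": b, "c": c}

def lag1_granger_p(cause, effect):
    """p-value of the lag-1 Granger F-test from `cause` to `effect`."""
    target = effect[1:]
    Xr = np.column_stack([np.ones(len(target)), effect[:-1]])
    Xf = np.column_stack([Xr, cause[:-1]])
    rss = lambda M: np.sum((target - M @ np.linalg.lstsq(M, target, rcond=None)[0]) ** 2)
    df2 = len(target) - Xf.shape[1]
    F = (rss(Xr) - rss(Xf)) / (rss(Xf) / df2)
    return stats.f.sf(F, 1, df2)

# Test every ordered pair, as an exhaustive search over the system would.
pvals = {(u, v): lag1_granger_p(series[u], series[v])
         for u, v in itertools.permutations(series, 2)}
```

Only the simulated edge a → b should come out significant; the other five ordered pairs act as null tests, which is the kind of systematic scan the package is built to make convenient.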


Granger Components Analysis: Unsupervised learning of latent temporal dependencies

Neural Information Processing Systems

A new technique for unsupervised learning of time series data based on the notion of Granger causality is presented. The technique learns pairs of projections of a multivariate data set such that the resulting components -- driving and driven -- maximize the strength of the Granger causality between the latent time series (how strongly the past of the driving signal predicts the present of the driven signal). A coordinate descent algorithm that learns pairs of coefficient vectors in an alternating fashion is developed and shown to blindly identify the underlying sources (up to scale) on simulated vector autoregressive (VAR) data. The technique is tested on scalp electroencephalography (EEG) data from a motor imagery experiment where the resulting components lateralize with the side of the cued hand, and also on functional magnetic resonance imaging (fMRI) data, where the recovered components express previously reported resting-state networks.
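As a way to see the core idea, here is a deliberately simplified toy, not the paper's coordinate descent algorithm: two latent sources, one driving the other, are linearly mixed into two observed channels, and a brute-force scan over pairs of unit-norm projection vectors keeps the pair that maximizes a lag-1 Granger F-statistic. The mixing matrix, dynamics, and all names are invented for illustration.

```python
# Toy illustration of the Granger Components idea: scan projection pairs of a
# mixed two-source system and keep the pair with the strongest lag-1 Granger
# causality. This brute-force scan stands in for the paper's coordinate descent.
import numpy as np

rng = np.random.default_rng(2)
n = 500
s1 = np.zeros(n)                         # latent driving source, AR(1)
s2 = np.zeros(n)                         # latent driven source
for t in range(1, n):
    s1[t] = 0.9 * s1[t - 1] + rng.normal()
    s2[t] = 0.5 * s2[t - 1] + 0.8 * s1[t - 1] + 0.1 * rng.normal()

A = np.array([[1.0, 0.5], [0.3, 1.0]])   # mixing matrix (illustrative)
X = A @ np.vstack([s1, s2])              # observed 2-channel data

def granger_F(d, r):
    """Lag-1 Granger F-statistic from driving series d to driven series r."""
    target = r[1:]
    Xr = np.column_stack([np.ones(len(target)), r[:-1]])
    Xf = np.column_stack([Xr, d[:-1]])
    rss = lambda M: np.sum((target - M @ np.linalg.lstsq(M, target, rcond=None)[0]) ** 2)
    df2 = len(target) - Xf.shape[1]
    return (rss(Xr) - rss(Xf)) / (rss(Xf) / df2)

# Scan pairs of unit-norm projections (angles on the half-circle).
angles = np.linspace(0.0, np.pi, 60, endpoint=False)
best = (-np.inf, None, None)
for tu in angles:                        # driving projection
    u = np.array([np.cos(tu), np.sin(tu)])
    d = u @ X
    for tv in angles:                    # driven projection
        v = np.array([np.cos(tv), np.sin(tv)])
        F = granger_F(d, v @ X)
        if F > best[0]:
            best = (F, u, v)

F_best, u_best, v_best = best

# The driven projection should align (up to sign and scale) with the unmixing
# row that recovers s2, i.e. the second row of inv(A).
true_v = np.linalg.inv(A)[1]
align = abs(v_best @ true_v) / (np.linalg.norm(v_best) * np.linalg.norm(true_v))
```

In this toy, the winning driven projection lines up with the true unmixing direction for the driven source, echoing the paper's finding that maximizing Granger causality between projections can blindly identify the underlying sources up to scale.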


Learning interaction rules from multi-animal trajectories via augmented behavioral models

Neural Information Processing Systems

Extracting the interaction rules of biological agents from movement sequences poses challenges in various domains. Granger causality is a practical framework for analyzing interactions from observed time-series data; however, this framework ignores the structures and assumptions of the generative process in animal behaviors, which may lead to interpretational problems and sometimes erroneous assessments of causality. In this paper, we propose a new framework for learning Granger causality from multi-animal trajectories via theory-based behavioral models augmented with interpretable data-driven models. We adopt an approach that augments incomplete multi-agent behavioral models, described by time-varying dynamical systems, with neural networks. For efficient and interpretable learning, our model leverages theory-based architectures separating navigation and motion processes, along with theory-guided regularization for reliable behavioral modeling. This can provide interpretable signs of Granger-causal effects over time, i.e., when specific others cause approach or separation. In experiments using synthetic datasets, our method achieved better performance than various baselines. We then analyzed multi-animal datasets of mice, flies, birds, and bats, which validated our method and yielded novel biological insights.


Directed Spectrum Measures Improve Latent Network Models Of Neural Populations

Neural Information Processing Systems

While some biological neural networks are well known, we expect that the vast majority remain undiscovered due to the enormous variety of tasks the brain performs. Many methods have been developed to help discover latent networks of neural populations (i.e.