A Broader Impact

Neural Information Processing Systems

It is essential to approach the interpretation of our algorithm's results with caution and subject them to critical evaluation. In this section, we provide the definition of partial ancestral graphs (PAGs). A PAG shares the same adjacencies as any MAG in the observational equivalence class of MAGs (Section 2). In Appendix D, we derive the causal effect for the SMCM in Figure 3 (top), i.e., (6), and prove Theorem 3.1. The proof of (6) begins by applying the law of total probability to P(y | do(t = t)); in the subsequent derivation, one step follows from Rule 3a, step (c) follows from Rule 1, and step (g) follows from Rule 2. For the proof of Theorem 3.1, Lemma 1 supposes that Assumptions 1 to 3 hold; given this claim, Theorem 3.1 follows from Tian and Pearl [2002, Theorem 4].
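The first step of the derivation invoked above (the law of total probability applied to an interventional distribution) can be illustrated schematically. The marginalised variable W below is an assumption for illustration only; the excerpt does not recover the actual variables of the SMCM in Figure 3:

```latex
P(y \mid do(t)) \;=\; \sum_{w} P\bigl(y \mid do(t), w\bigr)\, P\bigl(w \mid do(t)\bigr)
```

The cited rules then simplify the conditional and marginal factors on the right-hand side using the do-calculus.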






Constraint- and Score-Based Nonlinear Granger Causality Discovery with Kernels

Murphy, Fiona, Benavoli, Alessio

arXiv.org Machine Learning

Granger causality (GC) [15] is a time series causal discovery framework that uses predictive modeling to identify the underlying causal structure of a time series system. Relying on the assumption that cause precedes effect, GC assesses whether including the lagged information from one time series in the autoregressive model of a second time series enhances its predictions. This improvement indicates a predictive relationship between the time series variables, where one time series provides supplemental information about the future of another time series, thereby signifying the presence of a (Granger) causal relationship. GC requires only observational data, and has been used for time series causal discovery across diverse domains, including climate science [33], political and social sciences [17], econometrics [4], and biological systems studies [13]. The original formulation of GC requires several assumptions to be satisfied for causal identifiability. With regard to the candidate time series system, it is assumed that the time series variables are stationary, and that all variables are observed (absence of latent confounders). GC was initially proposed for bivariate time series systems, but was generalised to the multivariate setting to accommodate the assumption that all relevant variables are included in the analysis [15]. Additional assumptions are made with regard to the types of causal relationships that can be identified within the time series system. GC cannot estimate a causal relationship between time series at an instantaneous time point, as it relies on the relationship between lagged values and predicted values to determine a GC relationship.
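The core predictive test described above can be sketched in a few lines: fit an autoregressive model of y on its own past, then on its own past plus the past of x, and ask whether the second fit is significantly better. This is a minimal bivariate illustration with simulated data, not the paper's kernel-based method; all names and the simulation are assumptions for illustration.

```python
# Minimal sketch of bivariate Granger causality: does adding lagged x
# improve an autoregressive prediction of y? Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, lag = 500, 1

# Simulate a system where x Granger-causes y (y_t depends on x_{t-1}).
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

def rss(X, target):
    """Residual sum of squares of an OLS fit of target on X."""
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return float(resid @ resid)

target = y[lag:]
ones = np.ones_like(target)
restricted = np.column_stack([ones, y[:-lag]])      # y's own past only
full = np.column_stack([ones, y[:-lag], x[:-lag]])  # plus x's past

rss_r, rss_f = rss(restricted, target), rss(full, target)
# F-statistic for the single added regressor: a large value means the
# lagged x carries supplemental information about the future of y.
f_stat = (rss_r - rss_f) / (rss_f / (len(target) - full.shape[1]))
print(f"F = {f_stat:.1f}")
```

The restricted-versus-full comparison is exactly the "improvement in prediction" that defines a GC relationship; nonlinear extensions such as the kernel methods of this paper replace the linear OLS fits with more flexible regressors.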


Inside the wild experiments physicists would do with zero limits

New Scientist

From a particle smasher encircling the moon to an "impossible" laser, five scientists reveal the experiments they would run in a world powered purely by imagination.

In physics, breakthroughs are rare. Experiments are slow, expensive and often end up refining, rather than rewriting, our understanding of the universe. But what if the only constraint on scientific ambition were imagination? We asked five physicists to describe the kind of experiment they would do if they didn't have to worry about budgets, engineering limitations or political realities. Not because we expect any of it to happen soon - though in a few cases, momentum is building - but because it is revealing to see where their minds go when the usual boundaries are stripped away. One researcher wants to launch radio telescopes deep into space to probe dark matter with cosmic energy flashes.


Higher-Order Causal Structure Learning with Additive Models

Enouen, James, Zheng, Yujia, Ng, Ignavier, Liu, Yan, Zhang, Kun

arXiv.org Machine Learning

Causal structure learning has long been the central task of inferring causal insights from data. Despite the abundance of real-world processes exhibiting higher-order mechanisms, however, an explicit treatment of interactions in causal discovery has received little attention. In this work, we focus on extending the causal additive model (CAM) to additive models with higher-order interactions. The second level of modularity we introduce to the structure learning problem is most easily represented by a directed acyclic hypergraph, which extends the DAG. We introduce the definitions and theoretical tools needed to handle this novel structure and then provide identifiability results for the hyper DAG, extending the typical Markov equivalence classes. We next provide insights into why learning the more complex hypergraph structure may actually lead to better empirical results. In particular, more restrictive assumptions like CAM's correspond to easier-to-learn hyper DAGs and better finite-sample complexity. We finally develop an extension of the greedy CAM algorithm that can handle the more complex hyper DAG search space and demonstrate its empirical usefulness in synthetic experiments.
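The "second level of modularity" can be made concrete with a toy directed acyclic hypergraph: each node's mechanism is a sum of functions over subsets (hyperedges) of its parents, rather than a sum over individual parents as in a plain CAM. This is an illustrative sketch, not the authors' code; the variable names, functional forms, and dictionary representation are all assumptions.

```python
# Toy hyper DAG: a causal additive model with a higher-order interaction.
# A plain CAM allows only singleton hyperedges; here x3 has a genuine
# 2-ary hyperedge {x1, x2}, so its mechanism is second-order.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Simulated mechanisms: x3 = sin(x1) + f(x1, x2) + noise, where the
# interaction term x1 * x2 is not additive in the individual parents.
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
x3 = np.sin(x1) + x1 * x2 + 0.1 * rng.normal(size=n)

# Representation: node -> list of parent subsets entering additively.
hyper_dag = {
    "x1": [],
    "x2": [],
    "x3": [["x1"], ["x1", "x2"]],  # singleton edge plus a 2-ary hyperedge
}

# The order of the model is the size of its largest hyperedge; a CAM is
# exactly the special case where this is at most 1.
max_order = max(
    (len(e) for edges in hyper_dag.values() for e in edges), default=0
)
print(max_order)  # 2
```

Restricting the search to singleton hyperedges recovers the CAM setting, which is why the more restrictive assumption corresponds to an easier-to-learn hyper DAG.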


Efficient Latent Variable Causal Discovery: Combining Score Search and Targeted Testing

Ramsey, Joseph, Andrews, Bryan, Spirtes, Peter

arXiv.org Artificial Intelligence

Learning causal structure from observational data is especially challenging when latent variables or selection bias are present. The Fast Causal Inference (FCI) algorithm addresses this setting but performs exhaustive conditional independence tests across many subsets, often leading to spurious independences, missing or extra edges, and unreliable orientations. We present a family of score-guided mixed-strategy causal search algorithms that extend this framework. First, we introduce BOSS-FCI and GRaSP-FCI, variants of GFCI (Greedy Fast Causal Inference) that substitute BOSS (Best Order Score Search) or GRaSP (Greedy Relaxations of Sparsest Permutation) for FGES (Fast Greedy Equivalence Search), preserving correctness while trading off scalability and conservativeness. Second, we develop FCI Targeted-Testing (FCIT), a novel hybrid method that replaces exhaustive testing with targeted, score-informed tests guided by BOSS. FCIT guarantees well-formed PAGs and achieves higher precision and efficiency across sample sizes. Finally, we propose a lightweight heuristic, LV-Dumb (Latent Variable "Dumb"), which returns the PAG of the BOSS DAG (Directed Acyclic Graph). Though not strictly sound for latent confounding, LV-Dumb often matches FCIT's accuracy while running substantially faster. Simulations and real-data analyses show that BOSS-FCI and GRaSP-FCI provide robust baselines, FCIT yields the best balance of precision and reliability, and LV-Dumb offers a fast, near-equivalent alternative. Together, these methods demonstrate that targeted and score-guided strategies can dramatically improve the efficiency and correctness of latent-variable causal discovery.
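The efficiency gain from targeted testing can be seen with a small combinatorial sketch: exhaustive FCI-style testing conditions on every subset of a node pair's neighbourhood, while a targeted strategy tests only the conditioning sets suggested by a score-based search. This is an illustration of the counting argument only, not the FCIT algorithm; the neighbourhood and the "score-informed" candidate separators below are hypothetical.

```python
# Exhaustive vs targeted conditional-independence testing for one
# candidate edge. Illustrative counting sketch only.
from itertools import combinations

# Hypothetical adjacents of a candidate edge u - v.
neighbours = ["a", "b", "c", "d", "e", "f"]

# Exhaustive: every subset of the neighbourhood is a potential
# conditioning set, so the test count grows as 2^|neighbours|.
exhaustive = [
    s for k in range(len(neighbours) + 1)
    for s in combinations(neighbours, k)
]

# Targeted: suppose a score search such as BOSS flags only these
# plausible separators (hypothetical candidates for illustration).
targeted = [("a",), ("a", "c")]

print(len(exhaustive), len(targeted))  # 64 2
```

The exponential-versus-constant gap per edge is what lets a score-guided method keep test counts low while avoiding the spurious independences that exhaustive testing tends to accumulate at finite sample sizes.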