World's oldest poison-tipped arrow discovered in South Africa

Popular Science

The 60,000-year-old relic contains traces of a toxic onion. For thousands of years, hunters around the world have used poison-tipped arrows to help take down prey. The curare plant poisons used by South and Central American hunters, for example, paralyze the respiratory system, while inhabitants of the Kalahari Desert have relied on toxins harvested from beetle larvae.



Characterization and Learning of Causal Graphs with Latent Confounders and Post-treatment Selection from Interventional Data

Luo, Gongxu, Li, Loka, Chen, Guangyi, Dai, Haoyue, Zhang, Kun

arXiv.org Artificial Intelligence

Interventional causal discovery seeks to identify causal relations by leveraging distributional changes introduced by interventions, even in the presence of latent confounders. Beyond the spurious dependencies induced by latent confounders, we highlight a common yet often overlooked challenge: post-treatment selection, in which samples are selectively included in datasets after interventions. This fundamental challenge widely exists in biological studies; for example, in gene expression analysis, both observational and interventional samples are retained only if they meet quality control criteria (e.g., highly active cells). Neglecting post-treatment selection may introduce spurious dependencies and distributional changes under interventions, which can mimic causal responses, thereby distorting causal discovery results and challenging existing causal formulations. To address this, we introduce a novel causal formulation that explicitly models post-treatment selection and reveals how its differential reactions to interventions can distinguish causal relations from selection patterns, allowing us to go beyond traditional equivalence classes toward the underlying true causal structure. We then characterize its Markov properties and propose a fine-grained interventional equivalence class, named FI-Markov equivalence, represented by a new graphical diagram, F-PAG. Finally, we develop a provably sound and complete algorithm, F-FCI, to identify causal relations, latent confounders, and post-treatment selection up to FI-Markov equivalence, using both observational and interventional data. Experimental results on synthetic and real-world datasets demonstrate that our method recovers causal relations despite the presence of both selection and latent confounders.
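The abstract's central warning, that selecting samples after the fact can create dependence where none exists, is easy to demonstrate numerically. The sketch below is a hypothetical toy, not the paper's model: two causally unrelated variables are filtered by a "quality control" rule that depends on both, mimicking the retained-cells example, and the surviving samples become strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two causally unrelated variables (no edge between them, no confounder).
x = rng.normal(size=n)
y = rng.normal(size=n)

# Post-treatment selection: retain only samples passing a filter that
# depends on both variables (a hypothetical stand-in for quality control
# such as keeping only highly active cells).
kept = (x + y) > 1.0

corr_all = np.corrcoef(x, y)[0, 1]
corr_sel = np.corrcoef(x[kept], y[kept])[0, 1]
print(f"correlation, all samples:      {corr_all:+.3f}")  # near zero
print(f"correlation, selected samples: {corr_sel:+.3f}")  # strongly negative
```

Any method that treats the selected sample as representative would read that induced correlation as evidence of a causal or confounded relation, which is exactly the distortion the formulation above is designed to account for.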



Appendix A: Removable Variables. In this section, we first prove the proposed graphical representation for a removable variable in a MAG

Neural Information Processing Systems

(Theorem 1). A.1 Graphical representation. Theorem 1. Vertex X is removable in a MAG M over the variables V if and only if: 1. for any Y ∈ Adj(X) and Z ∈ Ch(X) ∪ N(X) \ {Y}, Y and Z are adjacent, and 2. ... Proof. (⇒): Let H denote the induced subgraph of M over V \ {X}. Since X is removable in M, by the definition of removability, Lemma 6 implies that u is not m-connecting relative to W in H. (⇐): Lemma 6 implies that u is not m-connecting relative to W in M. This contradiction proves that X cannot have a descendant in {Y, Z} ∪ W, which implies that X blocks u in M.
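Condition 1 of Theorem 1 is purely adjacency-based, so it can be checked mechanically. The sketch below uses a hypothetical minimal encoding of a MAG (plain sets for Adj, Ch, and N per vertex, with names chosen here for illustration); it covers only condition 1, since the second condition is not legible in this excerpt.

```python
def condition1_holds(x, adj, children, neighbors):
    """Check condition 1 of Theorem 1 for vertex x: every Y in Adj(X)
    must be adjacent to every Z in (Ch(X) | N(X)) - {Y}."""
    targets = children[x] | neighbors[x]
    for y in adj[x]:
        for z in targets - {y}:
            if z not in adj[y]:
                return False
    return True

# Toy MAG fragment: X -> Z1, X -> Z2, X - W, with Z1, Z2, W
# pairwise adjacent, so condition 1 is satisfied for X.
adj = {
    "X": {"Z1", "Z2", "W"},
    "Z1": {"X", "Z2", "W"},
    "Z2": {"X", "Z1", "W"},
    "W": {"X", "Z1", "Z2"},
}
children = {"X": {"Z1", "Z2"}, "Z1": set(), "Z2": set(), "W": set()}
neighbors = {"X": {"W"}, "Z1": set(), "Z2": set(), "W": set()}

print(condition1_holds("X", adj, children, neighbors))  # True
```

Deleting any edge among Z1, Z2, W breaks the condition, matching the intuition that removing X must not destroy any m-connection that routed through it.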


Sound and Complete Causal Identification with Latent Variables Given Local Background Knowledge

Neural Information Processing Systems

When background knowledge (BK) is available in addition to observational data, a fundamental problem is: which causal relations are identifiable in the presence of latent variables? This problem is fundamental because it determines the maximal causal knowledge identifiable from the observational data and BK.


Causal Discovery over High-Dimensional Structured Hypothesis Spaces with Causal Graph Partitioning

Shah, Ashka, DePavia, Adela, Hudson, Nathaniel, Foster, Ian, Stevens, Rick

arXiv.org Artificial Intelligence

The aim in many sciences is to understand the mechanisms that underlie the observed distribution of variables, starting from a set of initial hypotheses. Causal discovery allows us to infer mechanisms as sets of cause and effect relationships in a generalized way -- without necessarily tailoring to a specific domain. Causal discovery algorithms search over a structured hypothesis space, defined by the set of directed acyclic graphs, to find the graph that best explains the data. For high-dimensional problems, however, this search becomes intractable, and scalable algorithms for causal discovery are needed to bridge the gap. In this paper, we define a novel causal graph partition that allows for divide-and-conquer causal discovery with theoretical guarantees. We leverage the idea of a superstructure -- a set of learned or existing candidate hypotheses -- to partition the search space. We prove under certain assumptions that learning with a causal graph partition always yields the Markov equivalence class of the true causal graph. We show our algorithm achieves comparable accuracy and a faster time to solution for biologically tuned synthetic networks and networks with up to $10^4$ variables. This makes our method applicable to gene regulatory network inference and other domains with high-dimensional structured hypothesis spaces.
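The superstructure idea can be illustrated schematically. The sketch below is not the paper's causal graph partition (which allows overlap between blocks and comes with guarantees); it is a deliberately crude stand-in that partitions a hypothetical superstructure into connected components, showing how a restricted candidate-edge set shrinks each block's search space.

```python
from itertools import combinations

# Hypothetical superstructure over 6 variables: an undirected graph of
# candidate edges. Any edge absent here is excluded from the search.
superstructure = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"},
    "D": {"E"}, "E": {"D", "F"}, "F": {"E"},
}

def connected_components(graph):
    """Crudest possible partition of the superstructure: its connected
    components, found by depth-first search."""
    seen, blocks = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, block = [start], set()
        while stack:
            v = stack.pop()
            if v in block:
                continue
            block.add(v)
            stack.extend(graph[v] - block)
        seen |= block
        blocks.append(block)
    return blocks

blocks = connected_components(superstructure)
# A discovery routine would now search each block independently; its cost
# scales with candidate edges per block, not with the full graph.
for block in blocks:
    candidates = [(u, v) for u, v in combinations(sorted(block), 2)
                  if v in superstructure[u]]
    print(sorted(block), "->", len(candidates), "candidate edges")
```

Even this crude split cuts the DAG search space from one over six variables to two independent searches over three, which is the scaling behavior the divide-and-conquer guarantee is meant to preserve.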