Neural Information Processing Systems

While SE ⊆ CE, it is not generally true that SE = CE; in the example, equality only holds when we set the weights, means and variances to very particular values. Consider now an intervention on X0. Here we present a slightly adapted version of Invariant Causal Prediction [27]. Under this approach, the complexity of testing a single set of predictors of size k is the cost of performing a least-squares regression and computing the residuals (O(k²N)) plus the cost of performing the t-test and F-test over each split of the e environments (O(eN)).
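The complexity count above can be made concrete with a minimal sketch of such a per-set invariance test: a least-squares fit of the response on the candidate set, followed by a t-test on residual means and an F-test on residual variances for each environment-versus-rest split. The function name, the two-sided F-test construction, and the Bonferroni corrections are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy import stats

def invariance_pvalue(X, y, env, S):
    """Hypothetical p-value for the invariance of predictor set S.

    Regress y on X[:, S] (O(k^2 N)), then for each environment split
    compare residual means (t-test) and variances (F-test) (O(eN)).
    """
    Xs = X[:, S]
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    envs = np.unique(env)
    pvals = []
    for e in envs:
        r_in, r_out = resid[env == e], resid[env != e]
        # t-test: equal residual means across the split
        p_t = stats.ttest_ind(r_in, r_out, equal_var=False).pvalue
        # two-sided F-test: equal residual variances across the split
        f = np.var(r_in, ddof=1) / np.var(r_out, ddof=1)
        dfn, dfd = len(r_in) - 1, len(r_out) - 1
        p_f = 2 * min(stats.f.cdf(f, dfn, dfd), 1 - stats.f.cdf(f, dfn, dfd))
        pvals.append(2 * min(p_t, p_f))  # Bonferroni over the two tests
    return min(1.0, len(envs) * min(pvals))  # Bonferroni over the splits
```

A set whose residual distribution shifts between environments (e.g. because the regression misses an intervened parent) yields a small p-value, while a set capturing the invariant mechanism does not.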



Active Invariant Causal Prediction: Experiment Selection through Stability

Neural Information Processing Systems

A fundamental difficulty of causal learning is that causal models can generally not be fully identified based on observational data only. Interventional data, that is, data originating from different experimental environments, improves identifiability. However, the improvement depends critically on the target and nature of the interventions carried out in each experiment. Since in real applications experiments tend to be costly, there is a need to perform the right interventions such that as few as possible are required. In this work we propose a new active learning (i.e. experiment selection) framework based on Invariant Causal Prediction (ICP).




We hope to have correctly understood your questions, and will try to exhaustively address all your comments

Neural Information Processing Systems

We would like to thank you for your time and valuable feedback, and for helping us to improve our manuscript. We hope to have correctly understood your questions, and will try to exhaustively address all your comments. We agree to be more specific as to what we mean by "other types of interventions" in footnote 1, p. 3, and will revise the footnote accordingly. We thank reviewer 4 for the additional comments on the manuscript. Combining this method with A-ICP is interesting future work.


Active Invariant Causal Prediction: Experiment Selection through Stability

Gamella, Juan L, Heinze-Deml, Christina

arXiv.org Machine Learning

A fundamental difficulty of causal learning is that causal models can generally not be fully identified based on observational data only. Interventional data, that is, data originating from different experimental environments, improves identifiability. However, the improvement depends critically on the target and nature of the interventions carried out in each experiment. Since in real applications experiments tend to be costly, there is a need to perform the right interventions such that as few as possible are required. In this work we propose a new active learning (i.e. experiment selection) framework (A-ICP) based on Invariant Causal Prediction (ICP; Peters et al., 2016). For general structural causal models, we characterize the effect of interventions on so-called stable sets, a notion introduced by Pfister et al. (2019). We leverage these results to propose several intervention selection policies for A-ICP which quickly reveal the direct causes of a response variable in the causal graph while maintaining the error control inherent in ICP. Empirically, we analyze the performance of the proposed policies in both population and finite-regime experiments.
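As a rough illustration of the ICP principle the abstract builds on (a sketch, not the authors' implementation): enumerate candidate predictor sets, keep those whose regression residuals look invariant across the experimental environments, and return the intersection of all accepted sets, which with high probability contains only direct causes of the response. For brevity this sketch tests only residual means with a t-test, and the name `icp_estimate` is hypothetical.

```python
from itertools import combinations

import numpy as np
from scipy import stats

def icp_estimate(X, y, env, alpha=0.05):
    """Intersect all predictor sets accepted as invariant across environments."""
    d = X.shape[1]
    accepted = []
    for k in range(d + 1):
        for S in combinations(range(d), k):
            if S:
                Xs = X[:, list(S)]
                beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                resid = y - Xs @ beta
            else:
                resid = y  # empty set: no regression, residuals are y itself
            # test residual-mean invariance on each environment-vs-rest split
            ps = [stats.ttest_ind(resid[env == e], resid[env != e],
                                  equal_var=False).pvalue
                  for e in np.unique(env)]
            if min(ps) * len(ps) > alpha:  # Bonferroni over the splits
                accepted.append(set(S))
    # variables appearing in every invariant set form the causal estimate
    return set.intersection(*accepted) if accepted else set()
```

Because the output is the intersection of accepted sets, a variable is only ever reported as causal if it is needed in every invariant model, which is the source of the error control that A-ICP's intervention selection policies preserve.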