Nonparametric Identifiability of Causal Representations from Unknown Interventions
Neural Information Processing Systems
We study causal representation learning, the task of inferring latent causal variables and their causal relations from high-dimensional functions ("mixtures") of the variables. Prior work relies on weak supervision, in the form of counterfactual pre- and post-intervention views or temporal structure; places restrictive assumptions, such as linearity, on the mixing function or latent causal model; or requires partial knowledge of the generative process, such as the causal graph or intervention targets. We instead consider the general setting in which both the causal model and the mixing function are nonparametric. The learning signal takes the form of multiple datasets, or environments, arising from unknown interventions in the underlying causal model. Our goal is to identify both the ground-truth latents and their causal graph up to a set of ambiguities which we show to be irresolvable from interventional data.
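To make the setting concrete, below is a minimal simulation sketch of the data-generating process the abstract describes: latent variables follow a causal model over a small graph, a fixed nonlinear mixing function maps them to observations, and each environment arises from an intervention on a single (unobserved) latent. The chain graph, the tanh mechanisms, and all names here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, n = 3, 10, 1000  # latent dim, observed dim, samples per environment

# Fixed nonlinear mixing g: R^d -> R^D, shared across all environments
# (an assumed two-layer tanh network; the paper only requires g to be
# nonparametric and injective, not this specific form).
W1 = rng.standard_normal((d, D))
W2 = rng.standard_normal((D, D))

def mix(z):
    return np.tanh(z @ W1) @ W2

def sample_latents(intervened=None):
    """Sample Z from an assumed chain SCM Z1 -> Z2 -> Z3.

    If `intervened` is an index, that node's mechanism is replaced by
    independent noise, severing its dependence on its parents (a
    perfect intervention with an unknown target)."""
    z = np.zeros((n, d))
    for j in range(d):
        noise = rng.normal(size=n)
        if j == intervened:
            z[:, j] = 2.0 * noise                          # intervened mechanism
        elif j == 0:
            z[:, j] = noise                                # root node
        else:
            z[:, j] = np.tanh(z[:, j - 1]) + 0.5 * noise   # causal mechanism
    return z

# One observational environment plus one unknown-target single-node
# intervention per latent. A learner sees only these X^e datasets; the
# graph, the intervention targets, and g itself are all unobserved.
environments = [mix(sample_latents())] + [
    mix(sample_latents(intervened=j)) for j in range(d)
]
```

Note that the mixing weights are drawn once and reused across environments, matching the assumption that only the latent causal mechanisms change between datasets while the mixing function stays fixed.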