Relating Graph Neural Networks to Structural Causal Models

Matej Zečević, Devendra Singh Dhami, Petar Veličković, Kristian Kersting

arXiv.org Machine Learning 

Understanding causal interactions is central to human cognition and thereby of high value to science, engineering, business, and law (Penn and Povinelli 2007). Developmental psychology has shown how children explore in a manner similar to that of a scientist, all by asking "What if?" and "Why?" questions (Gopnik 2012; Buchsbaum et al. 2012; Pearl and Mackenzie 2018), while artificial intelligence research dreams of automating the scientist's manner of reasoning (McCarthy 1998; McCarthy and Hayes 1981; Steinruecken et al. 2019). Deep learning has brought optimizable universality in approximation, which refers to the fact that for any function there will exist a neural network that approximates it to arbitrary precision (Cybenko 1989; Hornik

The SCM implies a graph structure over its modelled variables, and since GNNs operate on graphs, a closer inspection of the relation between the two models seems a reasonable step towards progressing research in neural-causal AI. Instead of taking inspiration from causality's principles to improve machine learning (Mitrovic et al. 2020), we show how GNNs can be used to perform causal computations, i.e., how causality can emerge within neural models. To be more precise about the term causal inference: we refer to the modelling of Pearl's Causal Hierarchy (PCH) (Bareinboim et al. 2020). That is, we are given partial knowledge of the SCM in the form of, e.g., the (partial) causal graph and/or data from the different levels of the hierarchy (associational, interventional, and counterfactual).
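To make the correspondence between an SCM and the graph a GNN operates on concrete, here is a minimal sketch, not the paper's construction: the three-variable SCM, the variable names, and the simple sum-style aggregation are illustrative assumptions. It defines a toy SCM, reads off the causal graph the structural equations imply as an adjacency matrix, and runs one GNN-style message-passing step over that graph using observational samples as node features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy SCM over (X1, X2, X3): each structural equation maps the variable's
# parents and an exogenous noise term to its value.
#   X1 := U1
#   X2 := 2 * X1 + U2
#   X3 := X1 - X2 + U3
def sample_scm(n):
    u1, u2, u3 = rng.normal(size=(3, n))
    x1 = u1
    x2 = 2.0 * x1 + u2
    x3 = x1 - x2 + u3
    return np.stack([x1, x2, x3], axis=1)          # shape (n, 3)

# The SCM implies a causal graph: an edge j -> i whenever X_j appears in the
# structural equation of X_i.  Row i of A lists the parents of node i.
A = np.array([[0., 0., 0.],    # X1 has no parents
              [1., 0., 0.],    # X1 -> X2
              [1., 1., 0.]])   # X1 -> X3, X2 -> X3

# One GNN-style message-passing step on that graph: every node combines its
# own transformed feature with the aggregated features of its parents.
def gnn_layer(H, A, W_self, W_nbr):
    return np.tanh(H @ W_self + A @ H @ W_nbr)

X = sample_scm(n=5)                                # observational samples
H = X[..., None]                                   # (5 samples, 3 nodes, 1 feature)
W_self = rng.normal(size=(1, 4))
W_nbr = rng.normal(size=(1, 4))
H1 = np.stack([gnn_layer(h, A, W_self, W_nbr) for h in H])
print(H1.shape)                                    # (5, 3, 4)
```

The point of the sketch is only the shared object: the adjacency matrix A is read off directly from the structural equations, and it is the same graph over which the message-passing layer propagates information.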