

iSPN


Product Networks, Tractable Probabilistic Models

Neural Information Processing Systems

However, as already suggested, our model is not restricted to any specific intervention type or instantiation. Figure 1 (a) illustrates the performance of iSPN on the Causal Health data set for different intervention types (perfect, atomic), noise terms (Gaussian, Gamma, Beta) and instantiations (indicator functions, modifications). Nonetheless, it can be observed that some interventions are modelled more precisely than others.

For the Earthquake and Cancer data sets, we use five different numbers of sum-node weights: 600, 1200, 1800, 2400 and 3200. For the synthetic Causal Health data set we use 300, 600, 1000, 1500 and 2000.
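The distinction between intervention types mentioned above can be made concrete with a toy example. The following is a minimal sketch (not the authors' code; variable names and mechanisms are invented for illustration) of a two-variable structural causal model A → B, contrasting a perfect atomic intervention do(B = b), which replaces B's structural equation with a constant, against a "modification", which swaps in a new mechanism for B:

```python
import random

def sample(n, intervention=None, seed=0):
    """Draw n samples from a toy SCM A -> B, optionally intervened on."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)                # exogenous cause A
        if intervention is None:
            b = 2.0 * a + rng.gauss(0.0, 0.5)  # observational mechanism for B
        elif intervention[0] == "atomic":
            b = intervention[1]                # do(B = b): mechanism replaced by a constant
        elif intervention[0] == "modify":
            b = intervention[1](a)             # modification: new mechanism f'(A)
        data.append((a, b))
    return data

obs  = sample(1000)                            # observational distribution
do_b = sample(1000, ("atomic", 3.0))           # perfect intervention do(B = 3)
mod  = sample(1000, ("modify", lambda a: -a))  # mechanism change B := -A
```

Under do(B = 3) every sampled B equals 3 regardless of A, whereas the modification keeps B dependent on A through the new mechanism.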





On the Tractability of Neural Causal Inference

Zečević, Matej, Dhami, Devendra Singh, Kersting, Kristian

arXiv.org Artificial Intelligence

Roth (1996) proved that any form of marginal inference with probabilistic graphical models (e.g., Bayesian networks) is at least NP-hard. Introduced and extensively investigated in the past decade, the neural probabilistic circuits known as sum-product networks (SPN) offer linear time complexity. On another note, research around neural causal models (NCM) has recently gained traction, demanding a tighter integration of causality for machine learning. To this end, we present a theoretical investigation of if, when, how and at what cost tractability occurs for different NCM. We prove that SPN-based causal inference is generally tractable, as opposed to standard MLP-based NCM. We further introduce a new tractable NCM class that is efficient in inference and fully expressive in terms of Pearl's Causal Hierarchy. Our comparative empirical illustration on simulations and standard benchmarks validates our theoretical proofs.
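The linear-time inference the abstract attributes to SPNs can be illustrated with a minimal sketch (an invented toy circuit, not the paper's models): in a valid SPN, a marginal query is answered by setting the indicator leaves of marginalized-out variables to 1 and doing a single bottom-up pass, so the cost is linear in the circuit size rather than exponential in the number of variables.

```python
class Leaf:
    """Indicator leaf for a binary variable taking a fixed value."""
    def __init__(self, var, val):
        self.var, self.val = var, val
    def value(self, evidence):
        if self.var not in evidence:   # variable marginalized out -> indicator is 1
            return 1.0
        return 1.0 if evidence[self.var] == self.val else 0.0

class Product:
    """Product node over children with disjoint scopes (decomposability)."""
    def __init__(self, children):
        self.children = children
    def value(self, evidence):
        out = 1.0
        for c in self.children:
            out *= c.value(evidence)
        return out

class Sum:
    """Weighted sum node over children with identical scopes (completeness)."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, node)
    def value(self, evidence):
        return sum(w * c.value(evidence) for w, c in self.weighted_children)

# A tiny SPN encoding a joint P(X1, X2) over two binary variables.
spn = Sum([
    (0.6, Product([Leaf("X1", 1), Leaf("X2", 1)])),
    (0.4, Product([Leaf("X1", 0),
                   Sum([(0.3, Leaf("X2", 1)), (0.7, Leaf("X2", 0))])])),
])

p_total = spn.value({})            # full marginalization: 1.0
p_x1    = spn.value({"X1": 1})     # P(X1=1) = 0.6
p_x2    = spn.value({"X2": 1})     # P(X2=1) = 0.6 + 0.4*0.3 = 0.72
```

Each query above is one pass over the circuit's nodes, which is the linear-time behavior contrasted with the NP-hardness of general marginal inference in Bayesian networks.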