Learning unbelievable probabilities

Yashar Ahmadian
Department of Brain and Cognitive Science, University of Rochester, Rochester, NY 14607
Center for Theoretical Neuroscience, Columbia University

Neural Information Processing Systems 

Loopy belief propagation performs approximate inference on graphical models with loops. One might hope to compensate for the approximation by adjusting model parameters. Learning algorithms for this purpose have been explored previously, and the claim has been made that every set of locally consistent marginals can arise from belief propagation run on a graphical model. On the contrary, here we show that many probability distributions have marginals that cannot be reached by belief propagation using any set of model parameters or any learning algorithm.
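To make the claim concrete, the following is a minimal sketch (not the paper's model) of sum-product loopy belief propagation on the smallest loopy graph: a binary pairwise Markov random field on a triangle. The coupling `J` and local fields `theta` are arbitrary assumed values chosen for illustration. Comparing the BP beliefs against brute-force marginals shows that even converged BP beliefs need not match the true marginals — the gap that parameter learning would have to close.

```python
import itertools
import numpy as np

# Hypothetical parameters for a binary pairwise MRF on a triangle (one loop).
states = [-1, +1]
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
J = 0.5                   # pairwise coupling (assumed value)
theta = [0.3, -0.2, 0.1]  # local fields (assumed values)

def psi(xi, xj):
    return np.exp(J * xi * xj)          # pairwise potential

def phi(i, xi):
    return np.exp(theta[i] * xi)        # local potential

def exact_marginals():
    """Single-node marginals by brute-force enumeration of all 2^3 states."""
    marg = np.zeros((3, 2))
    Z = 0.0
    for x in itertools.product(states, repeat=3):
        w = np.prod([psi(x[i], x[j]) for i, j in edges])
        w *= np.prod([phi(i, x[i]) for i in nodes])
        Z += w
        for i in nodes:
            marg[i, states.index(x[i])] += w
    return marg / Z

def loopy_bp(n_iters=200):
    """Parallel sum-product message passing; msgs keyed by directed edge."""
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    msgs = {e: np.ones(2) for e in directed}
    for _ in range(n_iters):
        new = {}
        for i, j in directed:
            m = np.zeros(2)
            for b, xj in enumerate(states):
                for a, xi in enumerate(states):
                    # Product of messages into i from all neighbors except j.
                    inc = np.prod([msgs[(k, t)][a]
                                   for (k, t) in directed if t == i and k != j])
                    m[b] += phi(i, xi) * psi(xi, xj) * inc
            new[(i, j)] = m / m.sum()   # normalize for numerical stability
        msgs = new
    # Beliefs: local potential times all incoming messages, normalized.
    beliefs = np.zeros((3, 2))
    for i in nodes:
        for a, xi in enumerate(states):
            beliefs[i, a] = phi(i, xi) * np.prod(
                [msgs[(k, t)][a] for (k, t) in directed if t == i])
        beliefs[i] /= beliefs[i].sum()
    return beliefs

print("exact marginals:\n", np.round(exact_marginals(), 4))
print("BP beliefs:\n", np.round(loopy_bp(), 4))
```

On this graph the BP beliefs are locally consistent by construction, yet they disagree with the exact marginals; the paper's result concerns which target marginals no choice of `J` and `theta` can make BP reproduce.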