Neural Variational Inference and Learning in Undirected Graphical Models

Volodymyr Kuleshov, Stefano Ermon

Neural Information Processing Systems 

Many problems in machine learning are naturally expressed in the language of undirected graphical models. Here, we propose black-box learning and inference algorithms for undirected models that optimize a variational approximation to the log-likelihood of the model. Central to our approach is an upper bound on the log-partition function parametrized by a function q that we express as a flexible neural network. Our bound makes it possible to track the partition function during learning, to speed up sampling, and to train a broad class of hybrid directed/undirected models via a unified variational inference framework. We empirically demonstrate the effectiveness of our method on several popular generative modeling datasets.
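To make the abstract's central idea concrete, the sketch below illustrates how a tractable proposal q can be used to estimate and upper-bound the log-partition function of an unnormalized model via importance sampling. This is a minimal, assumed illustration (a fixed Gaussian q on a toy model, and a Cauchy–Schwarz-style bound), not the paper's exact estimator; in the paper q is a flexible neural network trained jointly with the model.

```python
import numpy as np

# Illustrative sketch (assumed, not the paper's exact method): track the
# partition function Z(theta) of an unnormalized model p_tilde(x) by
# importance sampling from a tractable proposal q(x).
#
# Identity:        Z(theta) = E_{x ~ q}[ p_tilde(x) / q(x) ]
# Cauchy-Schwarz:  log Z(theta) <= 0.5 * log E_{x ~ q}[ (p_tilde(x) / q(x))^2 ],
# an upper bound that tightens as q approaches the normalized model.

rng = np.random.default_rng(0)

def log_p_tilde(x, theta=1.0):
    """Unnormalized log-density of a toy model: exp(-0.5*theta*x^2),
    whose true log Z is 0.5*log(2*pi/theta)."""
    return -0.5 * theta * x ** 2

def log_q(x, sigma=1.5):
    """Tractable proposal density (a fixed Gaussian here; a neural net in the paper)."""
    return -0.5 * (x / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma ** 2)

def sample_q(n, sigma=1.5):
    return rng.normal(0.0, sigma, size=n)

n = 100_000
x = sample_q(n)
log_w = log_p_tilde(x) - log_q(x)  # log importance weights

# Monte Carlo estimates of log Z and of the chi^2-style upper bound.
log_Z_est = np.log(np.mean(np.exp(log_w)))
log_Z_upper = 0.5 * np.log(np.mean(np.exp(2 * log_w)))

print(f"true log Z       : {0.5 * np.log(2 * np.pi):.4f}")
print(f"IS estimate      : {log_Z_est:.4f}")
print(f"upper bound est. : {log_Z_upper:.4f}")
```

In this toy setting the upper-bound estimate sits slightly above the true log Z, and the gap shrinks as q is made closer to the normalized model, which is what motivates parametrizing q with a neural network and optimizing it during learning.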
