Learning in Markov Random Fields using Tempered Transitions
Neural Information Processing Systems
Markov random fields (MRF's), or undirected graphical models, provide a powerful framework for modeling complex dependencies among random variables. Maximum likelihood learning in MRF's is hard due to the presence of the global normalizing constant. In this paper we consider a class of stochastic approximation algorithms of the Robbins-Monro type that use Markov chain Monte Carlo to do approximate maximum likelihood learning. We show that using MCMC operators based on tempered transitions enables the stochastic approximation algorithm to better explore highly multimodal distributions, which considerably improves parameter estimates in large, densely-connected MRF's. Our results on the MNIST and NORB datasets demonstrate that we can successfully learn good generative models of high-dimensional, richly structured data that perform well on digit and object recognition tasks.
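The Robbins-Monro stochastic approximation the abstract refers to updates the parameters using the difference between data statistics and statistics sampled from a persistent Markov chain, with a decaying step size. The following is a minimal sketch of that backbone on a hypothetical two-variable binary MRF p(x1, x2) ∝ exp(θ·x1·x2), using a plain Gibbs operator rather than the tempered-transition operator the paper proposes; the model, the function names, and the step-size schedule are illustrative assumptions, not the paper's implementation.

```python
import math
import random

random.seed(0)

def gibbs_step(x, theta):
    # One sweep of Gibbs sampling for p(x1, x2) ∝ exp(theta * x1 * x2),
    # with x_i in {-1, +1}. (The paper would use a tempered-transition
    # operator here instead; plain Gibbs is an illustrative stand-in.)
    x = list(x)
    for i in (0, 1):
        other = x[1 - i]
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * theta * other))
        x[i] = 1 if random.random() < p_plus else -1
    return tuple(x)

def estimate_coupling(data_mean, n_steps=20000, lr0=0.5):
    """Robbins-Monro update: theta += lr_t * (data statistic - model
    statistic), where the model statistic comes from a persistent chain."""
    theta = 0.0
    chain = (1, 1)          # persistent Markov chain state
    theta_sum, n_avg = 0.0, 0
    for t in range(1, n_steps + 1):
        chain = gibbs_step(chain, theta)
        model_stat = chain[0] * chain[1]
        lr = lr0 / (1.0 + t / 1000.0)   # decaying Robbins-Monro step size
        theta += lr * (data_mean - model_stat)
        if t > n_steps // 2:            # average late iterates to reduce noise
            theta_sum += theta
            n_avg += 1
    return theta_sum / n_avg

# For this toy model E[x1*x2] = tanh(theta), so data drawn from
# theta* = 0.8 has mean statistic tanh(0.8).
true_theta = 0.8
theta_hat = estimate_coupling(math.tanh(true_theta))
```

The fixed point of the update is where the model's expected statistic matches the data statistic, which is exactly the maximum likelihood condition; the paper's contribution is replacing the transition operator so the chain still mixes when the model distribution is highly multimodal.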
Dec-31-2009