Auto-Encoding Sequential Monte Carlo

Tuan Anh Le, Maximilian Igl, Tom Jin, Tom Rainforth, Frank Wood

arXiv.org Machine Learning 

Probabilistic machine learning [Ghahramani, 2015] allows us to model the structure and dependencies of latent variables and observed data as a joint probability distribution. Once a model is defined, we can perform inference to update our prior beliefs about the latent variables in light of observed data, yielding the posterior distribution. The posterior can be used to answer any question we might have about the latent quantities while coherently accounting for our uncertainty about the world. We introduce a method for simultaneous model learning and inference amortization [Gershman and Goodman, 2014], given an unlabeled dataset of observations. The model is only partially specified; the remainder is parameterized by a generative network whose weights are to be learned. Inference amortization refers to spending additional time before inference to obtain an amortization artifact, which is then used to speed up inference at test time.
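The evidence estimates that drive this kind of model learning can come from sequential Monte Carlo. As a minimal sketch (not the paper's method), the following estimates log p(y_{1:T}) with a bootstrap particle filter on a hypothetical linear-Gaussian state-space model; the model, its parameters, and the function name are illustrative assumptions:

```python
import numpy as np

def smc_log_evidence(ys, num_particles=1000, seed=0):
    """Bootstrap particle filter estimate of log p(y_{1:T}) for a toy
    linear-Gaussian state-space model (hypothetical choices for illustration):
        x_1 ~ N(0, 1),  x_t = 0.9 * x_{t-1} + N(0, 1),  y_t = x_t + N(0, 1).
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=num_particles)  # sample x_1 from the prior
    log_Z = 0.0
    for t, y in enumerate(ys):
        if t > 0:
            # propagate particles through the transition density
            x = 0.9 * x + rng.normal(0.0, 1.0, size=num_particles)
        # observation log-weights under y_t | x_t ~ N(x_t, 1)
        log_w = -0.5 * (y - x) ** 2 - 0.5 * np.log(2.0 * np.pi)
        # log-mean-exp of the weights accumulates the evidence estimate
        m = log_w.max()
        w = np.exp(log_w - m)
        log_Z += m + np.log(w.mean())
        # multinomial resampling so all particles survive with equal weight
        idx = rng.choice(num_particles, size=num_particles, p=w / w.sum())
        x = x[idx]
    return log_Z

print(smc_log_evidence([0.5, -0.2, 0.1, 0.7]))
```

In a setting like the one the abstract describes, such an estimator would be made differentiable and maximized jointly over the generative network's weights and an amortized proposal network, rather than run once on a fixed model as here.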
