Auto-Encoding Sequential Monte Carlo
Tuan Anh Le, Maximilian Igl, Tom Jin, Tom Rainforth, Frank Wood
Probabilistic machine learning [Ghahramani, 2015] allows us to model the structure and dependencies of latent variables and observational data as a joint probability distribution. Once a model is defined, we can perform inference to update our prior beliefs about the latent variables in light of observed data, obtaining the posterior distribution. The posterior can be used to answer any question we might have about the latent quantities while coherently accounting for our uncertainty about the world. We introduce a method for simultaneous model learning and inference amortization [Gershman and Goodman, 2014], given an unlabeled dataset of observations. The model is only partially specified; the remainder is given by a generative network whose weights are to be learned. Inference amortization refers to spending additional computation before inference to obtain an amortization artifact, which is then used to speed up inference at test time.
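To illustrate what an amortization artifact is, here is a minimal sketch on a hypothetical toy model (not the paper's method): with a Gaussian model z ~ N(0, 1), x | z ~ N(z, 1), the artifact is a linear inference network q(z | x) = N(a·x + b, s²) whose weights (a, b) are trained once, up front, by gradient ascent on the ELBO; at test time the approximate posterior for any new observation is read off in constant time. All names and the model itself are illustrative assumptions.

```python
import numpy as np

# Toy model (assumed for illustration): z ~ N(0, 1), x | z ~ N(z, 1).
# Inference network q(z | x) = N(a*x + b, s^2); we train (a, b) and hold
# the variance fixed, since it does not affect the mean's ELBO gradient here.
rng = np.random.default_rng(0)
a, b = 0.0, 0.0
lr = 0.05

for step in range(2000):
    z = rng.normal(size=100)       # latents sampled from the prior
    x = z + rng.normal(size=100)   # observations sampled from the model
    m = a * x + b                  # amortized posterior mean for each x
    # For this conjugate Gaussian model the ELBO gradient w.r.t. the mean
    # is available in closed form: dELBO/dm = x - 2*m, hence
    # dELBO/da = (x - 2*m) * x and dELBO/db = (x - 2*m).
    g = x - 2 * m
    a += lr * np.mean(g * x)
    b += lr * np.mean(g)

# The exact posterior is N(x / 2, 1 / 2), so training should recover
# a close to 0.5 and b close to 0.
print(a, b)
```

After this one-off training pass, answering a posterior query for a fresh observation costs a single affine map rather than a run of an iterative inference algorithm, which is the speed-up amortization buys.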
May 29, 2017