
A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning

Neural Information Processing Systems

This paper takes a step towards temporal reasoning in a dynamically changing video, not in the pixel space that constitutes its frames, but in a latent space that describes the non-linear dynamics of the objects in its world. We introduce the Kalman variational auto-encoder, a framework for unsupervised learning of sequential data that disentangles two latent representations: an object's representation, coming from a recognition model, and a latent state describing its dynamics. As a result, the evolution of the world can be imagined and missing data imputed, both without the need to generate high dimensional frames at each time step. The model is trained end-to-end on videos of a variety of simulated physical systems, and outperforms competing methods in generative and missing data imputation tasks.


Reviews: A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning

Neural Information Processing Systems

The paper presents a time-series model for high-dimensional data that combines a variational auto-encoder (VAE) with a linear Gaussian state space model (LGSSM). The proposed model treats the latent representation produced by the VAE as the output of the LGSSM. Because exact inference in the LGSSM is possible via Kalman smoothing, variational inference for the overall model is efficient and accurate. To extend the temporal dynamics beyond linear dependence, the authors use an LSTM to parameterize the matrices of the LGSSM. The model's performance is evaluated on bouncing-ball and pendulum experiments.
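To make the LGSSM half of this setup concrete, here is a minimal, illustrative sketch (not the authors' code) of exact Kalman filtering over VAE-style pseudo-observations. The assumption is that a VAE encoder has already mapped each frame x_t to a low-dimensional vector a_t; the LGSSM then models the sequence a_t with linear Gaussian dynamics, which is what makes exact smoothing tractable in latent space. The dynamics and observation matrices below are a hypothetical constant-velocity toy, standing in for matrices the paper would produce with an LSTM.

```python
import numpy as np

def kalman_filter(a_seq, A, C, Q, R, mu0, P0):
    """Exact filtering for z_t = A z_{t-1} + w,  a_t = C z_t + v."""
    mu, P = mu0, P0
    means, covs = [], []
    for a in a_seq:
        # Predict: propagate the latent state through the linear dynamics.
        mu_pred = A @ mu
        P_pred = A @ P @ A.T + Q
        # Update: condition on the encoder's pseudo-observation a_t.
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        mu = mu_pred + K @ (a - C @ mu_pred)
        P = (np.eye(len(mu)) - K @ C) @ P_pred
        means.append(mu)
        covs.append(P)
    return np.stack(means), np.stack(covs)

# Toy example: a 2-D latent (position, velocity) observed through position
# only, standing in for the encoded frames of, say, a moving ball.
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity dynamics
C = np.array([[1.0, 0.0]])               # observe position only
Q = 0.01 * np.eye(2)                     # process noise
R = 0.1 * np.eye(1)                      # observation noise
a_seq = [np.array([float(t)]) for t in range(10)]  # steadily moving "object"
means, covs = kalman_filter(a_seq, A, C, Q, R, np.zeros(2), np.eye(2))
```

Because the observations move one unit per step, the filter's velocity estimate converges toward 1 even though velocity is never observed directly; this recovery of unobserved dynamics in latent space is what enables imagination and imputation without decoding frames at every step.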


A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning

Fraccaro, Marco, Kamronn, Simon, Paquet, Ulrich, Winther, Ole

Neural Information Processing Systems
