Learning Disentangled Representations with Semi-Supervised Deep Generative Models

N. Siddharth, Brooks Paige, Jan-Willem van de Meent, Alban Desmaison, Noah Goodman, Pushmeet Kohli, Frank Wood, Philip Torr

Advances in Neural Information Processing Systems (NIPS 2017)

Variational autoencoders (VAEs) learn representations of data by jointly training a probabilistic encoder and decoder network. Here we are interested in learning disentangled representations that encode distinct aspects of the data into separate variables. We propose to learn such representations using model architectures that generalise standard VAEs, employing a general graphical model structure in the encoder and decoder. This allows us to train partially-specified models that make relatively strong assumptions about a subset of interpretable variables, while relying on the flexibility of neural networks to learn representations for the remaining variables. We further define a general objective for semi-supervised learning in this model class, which can be approximated using an importance sampling procedure.
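As a concrete illustration (not from the paper), the sketch below shows the simplest instance of this model class: a VAE whose latent space is partially specified as an interpretable discrete variable y (e.g. a class label) alongside an unstructured continuous variable z, with a semi-supervised objective that uses the ELBO when y is observed and marginalises over y otherwise. All class names, architecture sizes, and the enumeration-based marginalisation are illustrative assumptions in plain PyTorch; the paper's actual objective handles general graphical-model structure via importance sampling rather than enumeration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemiSupervisedVAE(nn.Module):
    """Hypothetical minimal VAE with a partially-specified latent space:
    an interpretable discrete variable y plus an unstructured continuous
    variable z (illustrative sketch, not the authors' implementation)."""

    def __init__(self, x_dim=784, y_dim=10, z_dim=50, h_dim=256):
        super().__init__()
        self.y_dim = y_dim
        # q(y|x): classifier over the interpretable variable
        self.classifier = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                        nn.Linear(h_dim, y_dim))
        # q(z|x,y): Gaussian encoder for the unstructured variable
        self.enc = nn.Sequential(nn.Linear(x_dim + y_dim, h_dim), nn.ReLU())
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        # p(x|y,z): decoder producing Bernoulli logits
        self.dec = nn.Sequential(nn.Linear(y_dim + z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def elbo(self, x, y_onehot):
        """ELBO for a batch with y treated as observed."""
        h = self.enc(torch.cat([x, y_onehot], dim=-1))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterised sample
        logits = self.dec(torch.cat([y_onehot, z], dim=-1))
        rec = -F.binary_cross_entropy_with_logits(
            logits, x, reduction='none').sum(-1)              # log p(x|y,z)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)  # KL(q(z|x,y) || N(0,I))
        log_py = -torch.log(torch.tensor(float(self.y_dim)))  # uniform prior p(y)
        return rec - kl + log_py

    def loss(self, x, y=None, alpha=1.0):
        log_qy = F.log_softmax(self.classifier(x), dim=-1)    # q(y|x)
        if y is not None:
            # Supervised case: ELBO plus a classification term on the observed label.
            y1h = F.one_hot(y, self.y_dim).float()
            sup = self.elbo(x, y1h) + alpha * log_qy.gather(-1, y.unsqueeze(-1)).squeeze(-1)
            return -sup.mean()
        # Unsupervised case: marginalise y by enumerating its values
        # (feasible here because y is a small discrete variable; the paper's
        # objective instead uses importance sampling for general models).
        elbos = torch.stack(
            [self.elbo(x, F.one_hot(torch.full((x.size(0),), k, dtype=torch.long,
                                               device=x.device),
                                    self.y_dim).float())
             for k in range(self.y_dim)], dim=-1)
        qy = log_qy.exp()
        return -(qy * (elbos - log_qy)).sum(-1).mean()
```

The same loss function covers both regimes: calling model.loss(x, y) on labelled batches and model.loss(x) on unlabelled ones recovers a semi-supervised objective of the kind the abstract describes. Enumeration over y is used here purely for simplicity; for continuous or high-cardinality interpretable variables it would be replaced by a sampling-based estimator, which is where the paper's importance sampling procedure comes in.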