LatentGAN Autoencoder: Learning Disentangled Latent Distribution

Sanket Kalwar, Animikh Aich, Tanay Dixit, Adit Chhabra

arXiv.org Artificial Intelligence 

Generative models like GAN (Goodfellow et al. 2014) and VAE (Kingma and Welling 2014) have shown remarkable progress in recent years. Generative adversarial networks have shown state-of-the-art performance in a variety of tasks like image-to-image translation (Isola et al. 2018), video prediction (Liang et al. 2017), text-to-image translation (Zhang et al. 2017), drug discovery (Hong et al. 2019), and privacy preservation (Shi et al. 2018). VAEs have shown state-of-the-art performance in a variety of tasks like image generation (Gregor et al. 2015) and semi-supervised learning (Maaløe ...

In this work, we present a new way to learn control over the autoencoder latent distribution with the help of an AAE (Makhzani et al. 2016), which approximates the posterior of the autoencoder's latent distribution using an arbitrary prior distribution, and uses the information-maximization approach of (Chen et al. 2016) for learning disentangled representations. The previous work of (Wang, Peng, and Ko 2019) used a similar method of learning the latent prior with an AAE, along with a perceptual loss and an information-maximization regularizer to train the decoder with the help of an extra discriminator.
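To make the AAE-style latent matching concrete, below is a minimal sketch, not the authors' code: a discriminator pushes the encoder's latent codes toward an arbitrary prior (here a standard Gaussian) while the autoencoder is trained for reconstruction. The network sizes, the 784-dimensional flattened input, the loss weight, and all other hyperparameters are illustrative assumptions, and the InfoGAN-style mutual-information term (Chen et al. 2016) is omitted for brevity.

```python
import torch
import torch.nn as nn

LATENT, INPUT = 16, 784  # assumed sizes for a flattened MNIST-like input

encoder = nn.Sequential(nn.Linear(INPUT, 256), nn.ReLU(), nn.Linear(256, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(), nn.Linear(256, INPUT), nn.Sigmoid())
disc = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, 1))  # prior vs. encoded

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x):
    # 1) Reconstruction: the usual autoencoder objective.
    z = encoder(x)
    recon_loss = nn.functional.mse_loss(decoder(z), x)

    # 2) Discriminator: separate samples from the prior p(z) from encoded codes q(z).
    z_prior = torch.randn(x.size(0), LATENT)  # arbitrary prior, Gaussian here
    d_loss = bce(disc(z_prior), torch.ones(x.size(0), 1)) + \
             bce(disc(z.detach()), torch.zeros(x.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 3) Encoder: fool the discriminator so the aggregated posterior matches the prior.
    g_loss = bce(disc(z), torch.ones(x.size(0), 1))
    ae_loss = recon_loss + 0.1 * g_loss  # the 0.1 weight is an assumption
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    return recon_loss.item(), d_loss.item()

# Example usage with a random batch standing in for flattened images.
print(train_step(torch.rand(32, INPUT)))
```

The adversarial term here plays the role the snippet attributes to the AAE: it shapes the autoencoder's latent distribution toward a chosen prior so that sampling from that prior and decoding yields meaningful outputs.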