Sliced-Wasserstein Autoencoder: An Embarrassingly Simple Generative Model
Soheil Kolouri, Charles E. Martin, Gustavo K. Rohde
Scalable generative models that capture the rich and often nonlinear distribution of high-dimensional data (e.g., images, video, and audio) play a central role in various applications of machine learning, including transfer learning [14, 25], super-resolution [16, 21], image inpainting and completion [35], and image retrieval [7], among many others. Recent generative models, including Generative Adversarial Networks (GANs) [1, 2, 11, 30] and Variational Autoencoders (VAEs) [5, 15, 24], enable unsupervised, end-to-end modeling of the high-dimensional distribution of the training data. Learning such a generative model boils down to minimizing a dissimilarity measure between the data distribution and the output distribution of the generative model. To this end, and following the work of Arjovsky et al. [1] and Bousquet et al. [5], we approach the problem of generative modeling from the optimal transport point of view. The optimal transport problem [18, 34] provides a way to measure distances between probability distributions by transporting (i.e., morphing) one distribution into another.
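To make the sliced-Wasserstein idea in the title concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the standard Monte Carlo estimator of the sliced Wasserstein distance between two empirical distributions: project the samples onto random directions on the unit sphere, then exploit the closed-form 1-D optimal transport solution, which simply matches sorted samples. The function name, parameters, and sample sizes here are illustrative assumptions.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=50, p=2, seed=0):
    """Monte Carlo estimate of the sliced p-Wasserstein distance between
    two empirical distributions X and Y, each of shape (n_samples, dim).
    Assumes X and Y contain the same number of samples (a sketch, not
    the paper's exact estimator)."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    # Draw random directions and normalize them onto the unit sphere.
    theta = rng.normal(size=(n_proj, dim))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction (1-D distributions).
    x_proj = X @ theta.T  # shape (n_samples, n_proj)
    y_proj = Y @ theta.T
    # 1-D optimal transport has a closed form: match sorted samples
    # (i.e., match empirical quantiles).
    x_proj = np.sort(x_proj, axis=0)
    y_proj = np.sort(y_proj, axis=0)
    # Average the p-th power transport cost over samples and projections.
    return np.mean(np.abs(x_proj - y_proj) ** p) ** (1.0 / p)
```

Because each 1-D transport problem is solved by a sort, the estimator costs O(n_proj · n log n), avoiding the expensive high-dimensional optimal transport solve; this is the computational appeal the paper builds on.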
Apr-5-2018