Shape your Space: A Gaussian Mixture Regularization Approach to Deterministic Autoencoders
Neural Information Processing Systems
Variational Autoencoders (VAEs) are powerful probabilistic models for learning representations of complex data distributions. One important limitation of VAEs is the strong prior assumption that the latent representations learned by the model follow a simple uni-modal Gaussian distribution. Furthermore, the variational training procedure poses considerable practical challenges. Recently proposed regularized autoencoders offer a deterministic autoencoding framework that simplifies the original VAE objective and is significantly easier to train. Since these models provide only weak control over the learned latent distribution, they require an ex-post density estimation step to generate samples comparable to those of VAEs.
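The ex-post density estimation step mentioned above can be illustrated with a minimal sketch: after training a deterministic autoencoder, a Gaussian mixture is fit to the latent codes of the training data, and new latents are drawn from that mixture for decoding. The latent codes and cluster layout below are synthetic stand-ins, not from any actual model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in latent codes from a hypothetical trained deterministic
# autoencoder: two clusters in a 2-D latent space.
latents = np.concatenate([
    rng.normal(loc=-2.0, scale=0.5, size=(500, 2)),
    rng.normal(loc=+2.0, scale=0.5, size=(500, 2)),
])

# Ex-post density estimation: fit a Gaussian mixture to the latent codes.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(latents)

# Draw new latent samples; passing them through the decoder
# (decoder(z_new), not shown) would produce new data samples,
# analogous to sampling from a VAE prior.
z_new, _ = gmm.sample(100)
print(z_new.shape)  # (100, 2)
```

Because the mixture is fit after training, it can capture multi-modal latent structure that a fixed uni-modal Gaussian prior cannot.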