Fully Bayesian Autoencoders with Latent Sparse Gaussian Processes
Tran, Ba-Hien, Shahbaba, Babak, Mandt, Stephan, Filippone, Maurizio
arXiv.org Artificial Intelligence
Autoencoders and their variants are among the most widely used models in representation learning and generative modeling. However, autoencoder-based models usually assume that the learned representations are i.i.d. and therefore fail to capture correlations among data samples. To address this issue, we propose a novel Sparse Gaussian Process Bayesian Autoencoder (SGPBAE), in which we impose fully Bayesian sparse Gaussian process priors on the latent space of a Bayesian autoencoder. We perform posterior estimation for this model via stochastic gradient Hamiltonian Monte Carlo. We evaluate our approach qualitatively and quantitatively on a wide range of representation learning and generative modeling tasks, and show that it consistently outperforms multiple alternatives based on Variational Autoencoders.
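The abstract mentions posterior estimation via stochastic gradient Hamiltonian Monte Carlo (SGHMC). As a rough illustration of that sampler (not the paper's actual implementation), below is a minimal sketch of a single SGHMC update in the common simplified form of Chen et al. (2014), with the gradient-noise estimate set to zero; the function and parameter names are hypothetical:

```python
import numpy as np

def sghmc_step(theta, v, stoch_grad, lr=1e-3, alpha=0.01, rng=None):
    """One SGHMC update (simplified: gradient-noise estimate taken as zero).

    theta      : current parameter vector
    v          : momentum vector (same shape as theta)
    stoch_grad : callable returning a stochastic gradient of the
                 negative log posterior U(theta)
    lr         : step size (epsilon)
    alpha      : friction coefficient
    """
    rng = rng or np.random.default_rng()
    # Injected noise with variance 2 * alpha * lr keeps the target
    # distribution approximately invariant despite gradient noise.
    noise = rng.normal(0.0, np.sqrt(2.0 * alpha * lr), size=theta.shape)
    v = v - lr * stoch_grad(theta) - alpha * v + noise
    theta = theta + v
    return theta, v
```

In the paper's setting, `theta` would collect the decoder weights and latent variables, and `stoch_grad` would be evaluated on mini-batches; the sketch above only shows the generic update rule.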
Feb-9-2023