Reviews: Implicit Posterior Variational Inference for Deep Gaussian Processes

Neural Information Processing Systems 

To me, this is an important paper that contributes significantly to Bayesian deep learning. Specifically, the paper brings the idea of "adversarial variational Bayes" to deep Gaussian processes, which is both novel (although some may argue the idea already appears in variational autoencoders) and important. As the authors point out, learning a DGP is significantly harder than learning a shallow GP, even after introducing sparse approximations, and the field is dominated by mean-field variational inference (which is easy to implement and works robustly in practice, but may lose predictive power due to the mean-field assumption) and, more recently, stochastic MCMC methods such as SGHMC (which promise better results but are hard to tune in practice). All of this urges us to develop new and better methods for training DGPs, or even general Bayesian deep learning models such as Bayesian neural networks. The idea of "adversarial variational Bayes", or an "implicit posterior", is a promising direction, and the work in this paper demonstrates a significant step.
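To make the "adversarial variational Bayes" idea mentioned above concrete, here is a minimal sketch (my own illustration, not the paper's implementation) of the density-ratio trick that underlies it: the logit of the Bayes-optimal classifier separating samples of an implicit posterior q from samples of a prior p equals log q(x)/p(x), which is exactly the intractable term in the ELBO when q has no closed-form density. The distributions, the logistic-regression discriminator, and all parameter values below are illustrative assumptions.

```python
import numpy as np

# Density-ratio trick: for p = N(0, 1) and q = N(1, 1), the true
# log-ratio log q(x) - log p(x) = x - 0.5 is linear in x, so a
# logistic-regression "discriminator" with a bias can recover it.

rng = np.random.default_rng(0)
n = 20_000
x_p = rng.normal(0.0, 1.0, n)   # samples from the "prior" p
x_q = rng.normal(1.0, 1.0, n)   # samples from the implicit "posterior" q

x = np.concatenate([x_p, x_q])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = from p, 1 = from q

# Train the discriminator by plain gradient descent on the logistic loss.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(3000):
    probs = 1.0 / (1.0 + np.exp(-(w * x + b)))
    err = probs - y
    w -= lr * np.mean(err * x)
    b -= lr * np.mean(err)

# The learned logit w*x + b approximates log q(x)/p(x) = 1.0*x - 0.5,
# which an adversarial-VB scheme would plug into the ELBO in place of
# the unavailable closed-form KL term.
print(f"estimated log-ratio: {w:.3f} * x + {b:.3f}  (true: 1.000 * x - 0.500)")
```

In the paper's setting the same discriminator idea is applied layer-wise to the DGP's implicit variational posterior rather than to one-dimensional Gaussians, but the estimator being trained is this same log-density ratio.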