Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex
Neural Information Processing Systems
In this paper we investigate the use of Langevin Monte Carlo methods on the probability simplex and propose a new method, Stochastic Gradient Riemannian Langevin Dynamics, which is simple to implement and can be applied to large-scale data. We apply this method to latent Dirichlet allocation in an online minibatch setting, and demonstrate that it achieves substantial performance improvements over state-of-the-art online variational Bayesian methods.
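To illustrate the kind of update the abstract describes, here is a minimal sketch of a stochastic-gradient Riemannian Langevin step on the simplex, using an expanded-mean parameterisation (non-negative weights theta with a Gamma prior, mapped to the simplex by normalisation) on a toy categorical model rather than full LDA. The function name `sgrld_step` and all hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgrld_step(theta, a, eps, batch_counts, scale):
    """One illustrative SGRLD step on the simplex (sketch, not the paper's code).

    theta_k >= 0 carries a Gamma(a, 1) prior; the simplex point is
    pi = theta / theta.sum(). `batch_counts` holds per-category counts
    for a minibatch, and `scale` = N / n rescales the minibatch gradient
    to the full data set. The noise variance eps * theta reflects a
    Riemannian preconditioner diag(theta), and the |.| mirroring keeps
    theta non-negative.
    """
    pi = theta / theta.sum()
    n_batch = batch_counts.sum()
    # Stochastic estimate of the drift: prior term plus rescaled minibatch term.
    drift = a - theta + scale * (batch_counts - n_batch * pi)
    noise = rng.normal(0.0, np.sqrt(eps * theta))
    return np.abs(theta + 0.5 * eps * drift + noise)

# Toy run: categorical data with 3 outcomes, processed in minibatches.
data = rng.choice(3, size=1200, p=[0.5, 0.3, 0.2])
theta = np.ones(3)
for _ in range(2000):
    batch = rng.choice(data, size=60)
    counts = np.bincount(batch, minlength=3).astype(float)
    theta = sgrld_step(theta, a=1.0, eps=1e-3, batch_counts=counts,
                       scale=len(data) / len(batch))
pi = theta / theta.sum()  # approximately a posterior sample on the simplex
```

Each iteration touches only a small minibatch, which is what makes this family of methods applicable to large-scale data such as the LDA corpora in the paper.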