Reviews: Stochastic Expectation Maximization with Variance Reduction

Neural Information Processing Systems 

The paper builds on the classical stochastic EM of Cappé and Moulines (2009), adding an extra variance reduction term to the iteration formula. This variance reduction technique is inspired by the stochastic gradient descent literature (in particular, the algorithms developed in Le Roux et al. (2012), Johnson and Zhang (2013), and Defazio et al. (2014)). After setting up the background to present previous results in a unified and clear way, the authors present their algorithm and establish two theoretical properties: a local convergence rate and a global convergence guarantee. In the last section, they compare the new algorithm to several state-of-the-art methods on a Gaussian mixture toy example and on a probabilistic latent semantic analysis problem.
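To make the summary concrete, here is a minimal sketch of the kind of update being described: stochastic EM tracks a running average of sufficient statistics, and the variance-reduced variant corrects each minibatch estimate with an SVRG-style control variate (per-sample statistics at a snapshot minus their full-batch mean). Everything below — the 1-D two-component Gaussian mixture, the step size, and the epoch schedule — is a hypothetical illustration of this reviewer's reading, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 1-D mixture of two unit-variance Gaussians (hypothetical setup).
n = 600
x = np.concatenate([rng.normal(-2.0, 1.0, n // 2), rng.normal(2.0, 1.0, n // 2)])
K = 2  # number of mixture components

def suff_stats(xi, mu, pi):
    """Per-sample E-step: responsibilities r_k and the sufficient
    statistics (r_k, r_k * x_i) for a GMM with fixed unit variances."""
    log_r = np.log(pi) - 0.5 * (xi - mu) ** 2
    r = np.exp(log_r - log_r.max())
    r /= r.sum()
    return np.concatenate([r, r * xi])  # stacked statistics, length 2K

def m_step(s):
    """M-step: recover (means, weights) from averaged sufficient statistics."""
    r, rx = s[:K], s[K:]
    return rx / np.maximum(r, 1e-12), r / r.sum()

# Initialise parameters and the running statistics with one full E-step.
mu, pi = np.array([-1.0, 1.0]), np.full(K, 0.5)
s = np.mean([suff_stats(xi, mu, pi) for xi in x], axis=0)
rho = 0.05  # step size

for epoch in range(20):
    # Snapshot: full-batch statistics at the current parameters (SVRG-style).
    mu_snap, pi_snap = mu, pi
    s_full = np.mean([suff_stats(xi, mu_snap, pi_snap) for xi in x], axis=0)
    for i in rng.permutation(n):
        # Variance-reduced estimate of the full-batch statistics:
        # per-sample stats at current params, corrected by the snapshot.
        g = suff_stats(x[i], mu, pi) - suff_stats(x[i], mu_snap, pi_snap) + s_full
        s = (1 - rho) * s + rho * g
        mu, pi = m_step(s)

print(mu, pi)  # means should approach roughly -2 and 2
```

Without the two snapshot terms in `g`, this reduces to the plain stochastic EM recursion of Cappé and Moulines; the correction is exactly what lets the variance of the update shrink near a fixed point, which is the mechanism behind the local rate result.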