An EM-algorithm to fit LDFA-H (Section 2)

Neural Information Processing Systems

Since the MPLE objective function for LDFA-H given in Eq. (9) is not guaranteed to be convex, the EM algorithm may converge to a local minimum depending on the choice of initial value; a good initialization is therefore crucial to successful estimation. We initialize the parameters using the equivalence between CCA and probabilistic CCA shown by A. Anonymous, and the Lasso subproblem arising at each iteration (with the parameter estimates from iteration r-1 held fixed) is solved by the P-GLASSO algorithm of Mazumder et al. (2010). To validate the procedure, we simulated realistic data with known cross-region connectivity; notice that in the simulated data the amplitudes of the top four factors dominate the others.
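To make the initialization concrete, here is a minimal sketch in Python of the two-step idea described above: classical CCA supplies starting values for the loading matrices (via the CCA / probabilistic-CCA equivalence), and a graphical-lasso step stands in for the P-GLASSO subproblem. The function name `initialize_em`, the use of scikit-learn's `CCA` and `graphical_lasso`, and all parameter choices are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch only: CCA-based initialization for an EM fit of a two-block
# latent factor model, under the assumptions stated above.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.covariance import graphical_lasso

def initialize_em(X, Y, n_factors=4, alpha=0.1):
    """X: (n, p) and Y: (n, q) observed blocks; returns loading matrices
    and a sparse precision estimate to seed the first E-step."""
    # Step 1: classical CCA. By the CCA / probabilistic-CCA equivalence,
    # the canonical loadings give valid starting values for the
    # latent-factor loading matrices.
    cca = CCA(n_components=n_factors)
    U, V = cca.fit_transform(X, Y)          # canonical variates
    Wx, Wy = cca.x_loadings_, cca.y_loadings_

    # Step 2: sparse precision of the X-block residuals via graphical
    # lasso, standing in for the P-GLASSO subproblem mentioned above.
    Xc = X - X.mean(axis=0)                 # CCA centers internally
    R = Xc - U @ Wx.T                       # residuals of the X block
    emp_cov = np.cov(R, rowvar=False)
    cov_, prec_ = graphical_lasso(emp_cov, alpha=alpha)
    return Wx, Wy, prec_
```

In an actual LDFA-H fit these values would only seed the first E-step; P-GLASSO differs from scikit-learn's coordinate-descent solver in its update scheme, but both target the same penalized-likelihood problem.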



On the Global Convergence of (Fast) Incremental Expectation Maximization Methods

Belhal Karimi, Hoi-To Wai, Eric Moulines, Marc Lavielle

Neural Information Processing Systems

The EM algorithm is one of the most popular algorithms for inference in latent data models. The original formulation of the EM algorithm does not scale to large data sets, because the whole data set is required at each iteration of the algorithm. To alleviate this problem, Neal and Hinton [1998] proposed an incremental version of EM (iEM) in which, at each iteration, the conditional expectation of the latent data (E-step) is updated only for a mini-batch of observations. Another approach was proposed by Cappé and Moulines [2009], in which the E-step is replaced by a stochastic approximation step, closely related to stochastic gradient. In this paper, we analyze the incremental and stochastic versions of the EM algorithm, as well as the variance-reduced version of [Chen et al., 2018], in a common unifying framework. We also introduce a new incremental version, inspired by the SAGA algorithm of Defazio et al. [2014]. We establish non-asymptotic bounds for global convergence. Numerical applications are presented in this article to illustrate our findings.
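As a concrete illustration of the incremental idea, the following is a minimal Python sketch of iEM in the spirit of Neal and Hinton [1998], for a toy one-dimensional Gaussian mixture with unit variances and fixed equal weights: per-sample responsibilities are cached, only a mini-batch of them is refreshed at each iteration (partial E-step), and the M-step uses the full table of cached statistics. All names and model choices are illustrative assumptions, not from the paper.

```python
# Sketch only: incremental EM (iEM) for the means of a toy 1-D Gaussian
# mixture with unit variances, under the assumptions stated above.
import numpy as np

def iem_gmm_means(X, K=2, n_iters=50, batch_size=32, seed=None):
    """X: 1-D array of n observations; assumes batch_size <= n."""
    rng = np.random.default_rng(seed)
    n = len(X)
    mu = rng.standard_normal(K)             # initial component means
    resp = np.full((n, K), 1.0 / K)         # cached responsibilities

    for _ in range(n_iters):
        idx = rng.choice(n, size=batch_size, replace=False)
        # Partial E-step: refresh cached responsibilities for the
        # mini-batch only (log-space for numerical stability).
        logp = -0.5 * (X[idx, None] - mu[None, :]) ** 2
        logp -= logp.max(axis=1, keepdims=True)
        p = np.exp(logp)
        resp[idx] = p / p.sum(axis=1, keepdims=True)
        # M-step: means recomputed from the full cached table, so each
        # iteration uses fresh statistics for one batch and stale ones
        # for the rest -- the defining feature of iEM.
        mu = (resp * X[:, None]).sum(axis=0) / resp.sum(axis=0)
    return mu
```

The stochastic variant of Cappé and Moulines [2009] would instead maintain a single running average of sufficient statistics updated by a stochastic-approximation step, rather than a per-sample cache; the SAGA-inspired variant adds control variates to reduce the variance of the batch updates.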