Robust High Dimensional Expectation Maximization Algorithm via Trimmed Hard Thresholding

Wang, Di, Guo, Xiangyu, Li, Shi, Xu, Jinhui

arXiv.org Machine Learning 

Although the EM algorithm is well known to converge to an empirically good local estimator (Wu, 1983), finite sample statistical guarantees for its performance were not established until recent studies (Balakrishnan et al., 2017b; Zhu et al., 2017; Wang et al., 2015; Yi and Caramanis, 2015). Specifically, the first local convergence theory and finite sample statistical rates of convergence for the classical EM algorithm and its gradient ascent variant (gradient EM) were established by Balakrishnan et al. (2017b). Wang et al. (2015) then extended the classical EM and gradient EM algorithms to the high dimensional sparse setting; the key idea in their methods is an additional truncation step after the M-step, which exploits the intrinsic sparse structure of high dimensional latent variable models. Yi and Caramanis (2015) also studied the high dimensional sparse EM algorithm, proposing a method that uses a regularized M-estimator in the M-step. More recently, Zhu et al. (2017) addressed the computational issues of these previous methods in the high dimensional sparse setting.
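To make the truncation idea concrete, below is a minimal sketch of a hard-thresholding step applied to an M-step iterate, in the spirit of the sparse EM variants discussed above. It is not the paper's trimmed variant or any author's exact procedure; the function name and the sparsity parameter `s` are illustrative assumptions.

```python
import numpy as np

def hard_threshold(beta, s):
    """Keep the s largest-magnitude entries of beta; zero out the rest.

    Sketch of the generic truncation applied after an M-step update
    in sparse EM variants. `s` is an assumed sparsity-level parameter.
    """
    beta = np.asarray(beta, dtype=float)
    out = np.zeros_like(beta)
    keep = np.argsort(np.abs(beta))[-s:]  # indices of the s largest entries
    out[keep] = beta[keep]
    return out

# Example: a dense M-step iterate truncated to its 3 largest coordinates
beta_mstep = np.array([0.05, -2.1, 0.3, 1.7, -0.02, 0.9])
print(hard_threshold(beta_mstep, 3))  # [ 0.  -2.1  0.   1.7  0.   0.9]
```

Zeroing all but the top-`s` coordinates is what lets these methods exploit sparsity: the iterate stays `s`-sparse after every M-step, so the statistical error scales with `s` rather than the ambient dimension.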
