Estimating the coefficients of a mixture of two linear regressions by expectation maximization

Klusowski, Jason M., Yang, Dana, Brinda, W. D.

arXiv.org Machine Learning 

The Expectation-Maximization (EM) algorithm is a widely used technique for parameter estimation. It is an iterative procedure that monotonically increases the likelihood. When the likelihood is not concave, it is well known that EM can converge to a non-global optimum. However, recent work has sidestepped the question of whether EM reaches the likelihood maximizer, instead working out statistical guarantees on its loss directly. These explorations have identified regions of initialization for which the EM estimate approaches the true parameter in probability, assuming the model is well-specified. This line of research was spurred by [1], which established general conditions under which a ball centered at the true parameter is a basin of attraction for the population version of the EM operator. For a large enough sample size, the difference (within that ball) between the sample EM operator and the population EM operator can be bounded so that the EM estimate approaches the true parameter with high probability. That bound is the sum of two terms with distinct interpretations.
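The setting described above — fitting a mixture of two linear regressions by iterating an EM operator from an initialization near the true parameter — can be sketched as follows. This is an illustrative sketch only, assuming the standard symmetric model y_i = z_i <beta*, x_i> + noise with z_i uniform on {-1, +1}; the function and variable names are ours, not the paper's notation.

```python
import numpy as np

def em_step(beta, X, y, sigma=1.0):
    """One iteration of the (sample) EM operator for the symmetric
    two-component mixture of linear regressions (illustrative sketch)."""
    # E-step: posterior probability that z_i = +1 given the current beta.
    # The argument is clipped purely for numerical safety in np.exp.
    t = np.clip(-2.0 * y * (X @ beta) / sigma**2, -50.0, 50.0)
    w = 1.0 / (1.0 + np.exp(t))
    # M-step: the weighted least-squares problem collapses to a
    # sign-reweighted normal equation, beta+ = (X'X)^{-1} X' ((2w-1) * y).
    return np.linalg.solve(X.T @ X, X.T @ ((2.0 * w - 1.0) * y))

rng = np.random.default_rng(0)
n, d, sigma = 2000, 5, 1.0
beta_true = np.ones(d)
X = rng.standard_normal((n, d))
z = rng.choice([-1.0, 1.0], size=n)          # latent mixture labels
y = z * (X @ beta_true) + sigma * rng.standard_normal(n)

# Initialize inside a ball around the true parameter, as in the
# basin-of-attraction analyses discussed in the abstract.
beta = beta_true + 0.5 * rng.standard_normal(d)
for _ in range(30):
    beta = em_step(beta, X, y, sigma=sigma)

# The model is identifiable only up to a global sign flip, so measure
# the error against both beta_true and -beta_true.
err = min(np.linalg.norm(beta - beta_true), np.linalg.norm(beta + beta_true))
```

With a well-specified model and an initialization in the basin of attraction, the iterates settle near the true parameter (up to the global sign symmetry), mirroring the high-probability convergence guarantee the abstract describes.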
