Uniform Consistency in Nonparametric Mixture Models
Mixture models can also be used as a flexible tool for density estimation (Genovese and Wasserman, 2000; Ghosal and Van Der Vaart, 2007; Kruijer et al., 2010) and arise in the study of empirical Bayes (Saha et al., 2020; Feng and Dicker, 2015) and deconvolution (Fan, 1991; Zhang, 1990; Moulines et al., 1997). When covariates are involved, mixtures can be used to model heterogeneous dependencies between an observation Y and some covariate(s) X, in which the conditional distribution P[Y | X = x] arises as a mixture of multiple (noisy) regression curves. Despite their relevance and usefulness in applications, mixture models are unfortunately notoriously difficult to analyze. They fail to satisfy the usual regularity conditions of classical parametric statistics, and even basic properties such as identifiability present unusual challenges. For parametric mixture models, many of these issues have been carefully addressed, to the extent that we now have optimal estimators for Gaussian mixtures (Heinrich and Kahn, 2018; Wu and Yang, 2020; Doss et al., 2020), a detailed understanding of the EM algorithm for mixtures (Balakrishnan et al., 2017; Cai et al., 2019), and efficient algorithms for mixed linear regression models (Yi et al., 2014; Kwon et al., 2021). The situation for nonparametric mixtures, however, is quite different.
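To make the mixture-of-regressions statement concrete, one standard illustrative formulation (the symbols K, pi_k, f_k, and sigma are assumptions introduced here for exposition, not notation from this paper) writes the conditional density of Y given X = x as a weighted sum of noisy regression components:

```latex
% Illustrative mixture-of-regressions model (notation introduced for this sketch):
% K components with mixing weights \pi_k \ge 0, \sum_k \pi_k = 1,
% regression curves f_k, and Gaussian noise with standard deviation \sigma,
% where \varphi denotes the standard normal density.
p(y \mid X = x) \;=\; \sum_{k=1}^{K} \pi_k \, \frac{1}{\sigma}\,
  \varphi\!\left(\frac{y - f_k(x)}{\sigma}\right).
```

Here each component k contributes a regression curve f_k perturbed by Gaussian noise; in the parametric case the f_k are, e.g., linear (mixed linear regression), while in the nonparametric case they are unrestricted smooth functions.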
Aug-31-2021