Dendrogram of mixing measures: Hierarchical clustering and model selection for finite mixture models

Dat Do, Linh Do, Scott A. McKinley, Jonathan Terhorst, XuanLong Nguyen

arXiv.org Machine Learning 

In modern data analysis, it is often useful to reduce the complexity of a large dataset by clustering the observations into a small and interpretable collection of subpopulations. Broadly speaking, there are two major approaches. In "model-based" clustering, the data are assumed to be generated by a (usually small) collection of simple probability distributions such as normal distributions, and clusters are inferred by fitting a probabilistic mixture model. Because of their transparent probabilistic assumptions, the statistical properties of mixture models are well-understood. In particular, if there is no model misspecification, i.e., the data truly come from a mixture distribution, then the subpopulations can be consistently estimated. Unfortunately, this appealing asymptotic guarantee is somewhat at odds with what is often observed in practice, whereby mixture models fitted to complex datasets often return an uninterpretably large number of components, many of which are quite similar to each other. The tendency of mixture models to overfit on real data leads many analysts to employ "model-free" clustering methods instead. A well-known example is hierarchical clustering, which organizes the data into a nested sequence of partitions at different resolutions. It is particularly useful for data exploration as it does not require fixing a number of subpopulations a priori and can be visualized using a dendrogram.
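The contrast drawn above can be illustrated with a minimal sketch (not the authors' method): fitting Gaussian mixtures of increasing size and selecting the number of components by BIC on the model-based side, versus agglomerative (Ward) clustering cut at a chosen resolution on the model-free side. The simulated two-component data, the choice of BIC, and the Ward linkage are all illustrative assumptions; scikit-learn and SciPy are used for convenience.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated 1-D Gaussian subpopulations (illustrative data).
X = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)]).reshape(-1, 1)

# Model-based clustering: fit mixtures with k = 1..5 components
# and select k by the Bayesian information criterion (lower is better).
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 6)}
k_best = min(bics, key=bics.get)

# Model-free clustering: build a Ward-linkage dendrogram, then cut it
# into two clusters; no number of subpopulations is fixed in advance,
# and Z can be visualized with scipy.cluster.hierarchy.dendrogram.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

On clean, well-separated data like this, BIC typically recovers the true number of components; the overfitting the abstract describes arises when real data depart from the assumed mixture form.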
