Unsupervised Learning of Mixtures of Multiple Causes in Binary Data

Saund, Eric

Neural Information Processing Systems 

This paper presents a formulation for unsupervised learning of clusters reflecting multiple causal structure in binary data. Unlike the standard mixture model, a multiple-cause model accounts for observed data by combining assertions from many hidden causes, each of which can pertain to varying degree to any subset of the observable dimensions. A crucial issue is the mixing function for combining beliefs from different cluster centers in order to generate data reconstructions whose errors are minimized both during recognition and learning. We demonstrate a weakness inherent in the popular weighted sum followed by sigmoid squashing, and offer an alternative form of the nonlinearity. Results are presented demonstrating the algorithm's ability to successfully discover coherent multiple-cause representations in noisy test data and in images of printed characters.

1 Introduction

The objective of unsupervised learning is to identify patterns or features reflecting underlying regularities in data. Single-cause techniques, including the k-means algorithm and the standard mixture model (Duda and Hart, 1973), represent clusters of data points sharing similar patterns of 1s and 0s under the assumption that each data point belongs to, or was generated by, one and only one cluster center; output activity is constrained to sum to 1. In contrast, a multiple-cause model permits more than one cluster center to become fully active in accounting for an observed data vector. The advantage of a multiple-cause model is that a relatively small number of hidden variables can be applied combinatorially to generate a large data set.
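To make the mixing-function issue concrete, here is a minimal sketch contrasting a weighted sum followed by sigmoid squashing with a disjunctive (noisy-OR style) mixing nonlinearity. This is an illustration of the general idea, not the paper's exact formulation; the function names, the toy weight matrix `c`, and the cause-activity vector `m` are all invented for this example.

```python
import numpy as np

def sigmoid_mix(m, c):
    """Weighted sum of cause activities followed by sigmoid squashing.
    m: (K,) hidden-cause activities; c: (K, D) cause-to-dimension weights."""
    return 1.0 / (1.0 + np.exp(-(m @ c)))

def noisy_or_mix(m, c):
    """Disjunctive (noisy-OR style) mixing: an observable dimension is
    predicted ON if any sufficiently active cause asserts it.
    Entries of c are interpreted as probabilities in [0, 1]."""
    return 1.0 - np.prod(1.0 - m[:, None] * c, axis=0)

# Toy example: two causes, each asserting a different half of a
# 4-dimensional binary data vector.
c = np.array([[0.9, 0.9, 0.0, 0.0],
              [0.0, 0.0, 0.9, 0.9]])

m_both = np.array([1.0, 1.0])   # both causes fully active
m_none = np.array([0.0, 0.0])   # no cause active

r_or = noisy_or_mix(m_both, c)       # each asserted dimension -> 0.9
r_sig = sigmoid_mix(m_none, c)       # zero net input -> 0.5 everywhere
```

One symptom of the weighted-sum-plus-sigmoid form shows up in the last line: with no cause active, the sigmoid reconstruction sits at 0.5 on every dimension rather than predicting 0, whereas the noisy-OR form cleanly predicts 0 for dimensions no active cause asserts.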
