The Information Sieve
Greg Ver Steeg, Aram Galstyan
We introduce a new framework for unsupervised learning of representations based on a novel hierarchical decomposition of information. Intuitively, data is passed through a series of progressively fine-grained sieves. Each layer of the sieve recovers a single latent factor that is maximally informative about multivariate dependence in the data. The data is transformed after each pass so that the remaining unexplained information trickles down to the next layer. Ultimately, we are left with a set of latent factors explaining all the dependence in the original data and remainder information consisting of independent noise. We present a practical implementation of this framework for discrete variables and apply it to a variety of fundamental tasks in unsupervised learning, including independent component analysis, lossy and lossless compression, and predicting missing values in data.

The hope of finding a succinct principle that elucidates the brain's information processing abilities has often kindled interest in information-theoretic ideas (Barlow, 1989; Simoncelli & Olshausen, 2001). In machine learning, on the other hand, the past decade has witnessed a shift in focus toward expressive, hierarchical models, with successes driven by increasingly effective ways to leverage labeled data to learn rich models (Schmidhuber, 2015; Bengio et al., 2013). Information-theoretic ideas like the venerable InfoMax principle (Linsker, 1988; Bell & Sejnowski, 1995) can be, and are, applied in both contexts with empirical success, but they do not allow us to quantify the information value of adding depth to our representations.
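The idea of a layer extracting the factor that best explains multivariate dependence can be illustrated with a toy sketch. This is not the paper's algorithm (which also constructs remainder variables and optimizes over general probabilistic functions of the data); it only shows, for small binary data, how one might estimate total correlation TC(X) from empirical counts and pick, from a hand-made candidate set, the factor Y that explains the most dependence, TC(X) − TC(X|Y). All names and the candidate set are illustrative.

```python
# Toy sketch of a single "sieve layer" over binary data (illustrative only;
# the paper's method additionally builds remainder variables so that later
# layers see only the unexplained information).
from collections import Counter
from math import log2

def entropy(outcomes):
    """Empirical Shannon entropy in bits of a list of hashable outcomes."""
    n = len(outcomes)
    return -sum(c / n * log2(c / n) for c in Counter(outcomes).values())

def total_correlation(X):
    """TC(X) = sum_i H(X_i) - H(X), estimated from samples (list of tuples)."""
    d = len(X[0])
    return sum(entropy([x[i] for x in X]) for i in range(d)) - entropy(X)

def tc_given(X, Y):
    """TC(X | Y): weighted TC within each group of samples sharing a Y value."""
    groups = {}
    for x, y in zip(X, Y):
        groups.setdefault(y, []).append(x)
    return sum(len(rows) / len(X) * total_correlation(rows)
               for rows in groups.values())

# Data: three perfectly correlated bits plus one independent noise bit,
# so TC(X) = 2 bits, all attributable to the shared bit z.
X = [(z, z, z, r) for z in (0, 1) for r in (0, 1) for _ in range(5)]

# Candidate latent factors: each observed column, viewed as a function of X.
candidates = {f"x{i}": [x[i] for x in X] for i in range(4)}
explained = {name: total_correlation(X) - tc_given(X, Y)
             for name, Y in candidates.items()}
best = max(explained, key=explained.get)
print(best, explained[best])  # a copied bit explains all 2.0 bits of TC
```

Conditioning on any one of the correlated columns renders the rest independent (TC(X|Y) = 0), while conditioning on the noise column explains nothing; the sieve's real optimization searches over learned factors rather than this fixed candidate list.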
June 8, 2016