Collaborating Authors

 Lewicki, Michael S.


A Model for Learning Variance Components of Natural Images

Neural Information Processing Systems

We present a hierarchical Bayesian model for learning efficient codes of higher-order structure in natural images. The model, a nonlinear generalization of independent component analysis, replaces the standard assumption of independence for the joint distribution of coefficients with a distribution that is adapted to the variance structure of the coefficients of an efficient image basis. This offers a novel description of higher-order image structure and provides a way to learn coarse-coded, sparse-distributed representations of abstract image properties such as object location, scale, and texture.
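
To make the two-layer structure concrete, here is a minimal generative sketch in Python/NumPy of the kind of model the abstract describes: sparse higher-order sources v set the variances of the first-layer coefficients u, which are then rendered through an image basis A. All names, sizes, and distributions here are illustrative assumptions, not the paper's notation or code.

```python
# Toy generative sketch of a two-layer variance-component model.
# A, B, u, v and all sizes are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_coef, n_var = 64, 64, 8           # pixels, first-layer coeffs, variance components

A = rng.standard_normal((n_pix, n_coef))   # image basis (e.g., learned by ICA)
B = rng.standard_normal((n_coef, n_var))   # maps variance components to log-variances

def sample_patch():
    v = rng.laplace(size=n_var)            # sparse higher-order sources
    log_var = B @ v                        # coarse-coded pattern of log-variances
    u = rng.standard_normal(n_coef) * np.exp(0.5 * log_var)  # variance-modulated coeffs
    return A @ u                           # rendered image patch

x = sample_patch()
```

The sketch only runs the model forward; learning in the paper's setting adapts the second layer to the observed variance structure of natural-image coefficients.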


Learning Sparse Image Codes using a Wavelet Pyramid Architecture

Neural Information Processing Systems

We show how a wavelet basis may be adapted to best represent natural images in terms of sparse coefficients. The wavelet basis, which may be either complete or overcomplete, is specified by a small number of spatial functions which are repeated across space and combined in a recursive fashion so as to be self-similar across scale. These functions are adapted to minimize the estimated code length under a model that assumes images are composed of a linear superposition of sparse, independent components. When adapted to natural images, the wavelet bases take on different orientations and they evenly tile the orientation domain, in stark contrast to the standard, non-oriented wavelet bases used in image compression. When the basis set is allowed to be overcomplete, it also yields higher coding efficiency than standard wavelet bases. The general problem we address here is that of learning efficient codes for representing natural images.
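
The pyramid construction is easy to sketch. In the 1-D toy below, a single small function h is shifted across space and dilated across scale to form a self-similar, overcomplete synthesis matrix, and the estimated code length is the sparse-coding objective such functions would be adapted under. Names and sizes are hypothetical; this is not the paper's implementation.

```python
# Toy 1-D sketch of a self-similar, overcomplete pyramid basis and the
# code-length objective it would be adapted under. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
N = 32                              # 1-D "image" length, for brevity
h = rng.standard_normal(4)          # the small spatial function to be adapted

def synthesis_matrix(h, N):
    """Shift h across space and dilate it across two dyadic scales."""
    cols = []
    for scale in (1, 2):
        hk = np.repeat(h, scale)                # self-similar: dilated copy of h
        for shift in range(0, N - len(hk) + 1, scale):
            col = np.zeros(N)
            col[shift:shift + len(hk)] = hk
            cols.append(col)
    return np.array(cols).T                     # N x n_basis

Phi = synthesis_matrix(h, N)                    # 32 x 42: overcomplete
x = rng.standard_normal(N)                      # stand-in for an image row

def est_code_length(a, sigma=0.1, lam=1.0):
    """Negative log posterior: reconstruction cost + sparse-prior cost."""
    return 0.5 * np.sum((x - Phi @ a) ** 2) / sigma**2 + lam * np.sum(np.abs(a))

print(est_code_length(np.zeros(Phi.shape[1])))
```

Adapting the wavelet basis then amounts to minimizing this cost with respect to both the coefficients a and the shared function h.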


Unsupervised Classification with Non-Gaussian Mixture Models Using ICA

Neural Information Processing Systems

We present an unsupervised classification algorithm based on an ICA mixture model. The ICA mixture model assumes that the observed data can be categorized into several mutually exclusive data classes in which the components in each class are generated by a linear mixture of independent sources. The algorithm finds the independent sources and the mixing matrix for each class, and also computes the class membership probability for each data point. This approach extends the Gaussian mixture model so that the classes can have non-Gaussian structure. We demonstrate that this method can learn efficient codes to represent images of natural scenes and text.
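
As a rough illustration of the class-membership computation, the sketch below gives each class its own unmixing matrix, scores data under a Laplacian source prior, and applies Bayes' rule to get soft assignments. The two-class setup, variable names, and toy data are assumptions for illustration; the full algorithm also re-estimates each class's matrix from these responsibilities.

```python
# Sketch of the class-membership step in an ICA mixture model (toy 2-D,
# two-class setup; W, K, and the data are hypothetical illustrations).
import numpy as np

rng = np.random.default_rng(2)
X = rng.laplace(size=(500, 2)) @ rng.standard_normal((2, 2))  # toy non-Gaussian data

K = 2
W = [np.linalg.inv(rng.standard_normal((2, 2))) for _ in range(K)]  # per-class unmixing
log_prior = np.log(np.full(K, 1.0 / K))                             # equal class priors

def log_lik(x, Wk):
    """log p(x | class): Laplacian source density plus the volume term."""
    s = Wk @ x                                   # recovered sources for this class
    return -np.sum(np.abs(s)) - x.size * np.log(2.0) + np.log(abs(np.linalg.det(Wk)))

def responsibilities(x):
    """P(class k | x) by Bayes' rule over the per-class likelihoods."""
    ll = np.array([log_lik(x, W[k]) + log_prior[k] for k in range(K)])
    ll -= ll.max()                               # for numerical stability
    p = np.exp(ll)
    return p / p.sum()

print(responsibilities(X[0]))
```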


Coding Time-Varying Signals Using Sparse, Shift-Invariant Representations

Neural Information Processing Systems

A common way to represent a time series is to divide it into short-duration blocks, each of which is then represented by a set of basis functions. A limitation of this approach, however, is that the temporal alignment of the basis functions with the underlying structure in the time series is arbitrary. We present an algorithm for encoding a time series that does not require blocking the data. The algorithm finds an efficient representation by inferring the best temporal positions for functions in a kernel basis. These can have arbitrary temporal extent and are not constrained to be orthogonal.
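
One simple way to realize "inferring the best temporal positions" is a greedy matching-pursuit pass, used here as a stand-in for the paper's probabilistic inference: each kernel is correlated against the entire signal, and the best-matching (kernel, time, amplitude) event is subtracted out. The kernels, signal, and event counts are arbitrary illustrative choices.

```python
# Greedy, shift-invariant encoding of a toy signal by matching pursuit.
# The kernels, signal, and event counts are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(3)
T = 200
kernels = [np.hanning(16), np.diff(np.hanning(17))]   # two example kernel shapes

signal = np.zeros(T)                                  # plant a few known events
for _ in range(5):
    k = kernels[rng.integers(len(kernels))]
    t = rng.integers(0, T - len(k))
    signal[t:t + len(k)] += rng.normal() * k

residual, code = signal.copy(), []
for _ in range(5):                                    # one event per greedy pass
    best = None
    for ki, k in enumerate(kernels):
        corr = np.correlate(residual, k, mode="valid") / (k @ k)  # LS amplitude per shift
        t = int(np.argmax(np.abs(corr)))
        score = abs(corr[t]) * np.sqrt(k @ k)         # normalized match quality
        if best is None or score > best[0]:
            best = (score, ki, t, corr[t])
    _, ki, t, amp = best
    k = kernels[ki]
    residual[t:t + len(k)] -= amp * k                 # explain the event away
    code.append((ki, t, amp))                         # (kernel, time, amplitude)
```

Because the correlation slides over the whole signal, the code's time stamps align with the underlying events rather than with fixed block boundaries.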


Learning Nonlinear Overcomplete Representations for Efficient Coding

Neural Information Processing Systems

We derive a learning algorithm for inferring an overcomplete basis by viewing it as a probabilistic model of the observed data. Overcomplete bases allow for better approximation of the underlying statistical density. Using a Laplacian prior on the basis coefficients removes redundancy and leads to representations that are sparse and are a nonlinear function of the data. This can be viewed as a generalization of the technique of independent component analysis and provides a method for blind source separation when there are fewer mixtures than sources. We demonstrate the utility of overcomplete representations on natural speech and show that, compared to the traditional Fourier basis, the inferred representations potentially have much greater coding efficiency.
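
Since a Laplacian prior makes the MAP coefficients a nonlinear function of the data, a small sketch can show the contrast with the linear pseudo-inverse. The iteration below is a standard ISTA-style solver used as a stand-in for the paper's inference procedure; the sizes and penalty weight lam are illustrative assumptions.

```python
# MAP inference of sparse coefficients in a 2x overcomplete basis via an
# ISTA-style iteration, versus the linear pseudo-inverse. Sizes and the
# penalty weight lam are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n, m = 8, 16                        # data dim, number of basis vectors
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)      # unit-norm basis vectors
x = rng.standard_normal(n)          # stand-in for a frame of speech

lam, s = 0.1, np.zeros(m)
L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant sets a safe step size
for _ in range(500):                # gradient step, then soft threshold
    s = s + (A.T @ (x - A @ s)) / L
    s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)

s_linear = np.linalg.pinv(A) @ x    # dense, *linear* alternative code
print(np.count_nonzero(np.abs(s) > 1e-6), "of", m, "coefficients active")
```

With twice as many basis vectors as data dimensions, the L1 solution keeps only a few coefficients active, whereas the pseudo-inverse spreads energy across all of them.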