
Collaborating Authors

 Linsker, Ralph


Deriving Receptive Fields Using an Optimal Encoding Criterion

Neural Information Processing Systems

In unsupervised network learning, the development of the connection weights is influenced by statistical properties of the ensemble of input vectors, rather than by the degree of mismatch between the network's output and some 'desired' output. An implicit goal of such learning is that the network should transform the input so that salient features present in the input are represented at the output in a more useful form. This is often done by reducing the input dimensionality in a way that preserves the high-variance components of the input (e.g., principal component analysis, Kohonen feature maps). The principle of maximum information preservation ('infomax') is an unsupervised learning strategy that states (Linsker 1988): from the set of allowed input-output mappings (e.g., parametrized by the connection weights), choose a mapping that maximizes the (ensemble-averaged) Shannon information that the output vector conveys about the input vector, in the presence of noise.
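In the simplest linear-Gaussian setting, the infomax criterion can be written down and maximized in closed form. The sketch below is an illustration under assumed Gaussian statistics (not the paper's own simulation): for a single noisy linear unit with a fixed weight norm, the information-maximizing weight vector is the principal eigenvector of the input covariance, so infomax reduces to principal component extraction in this special case.

```python
import numpy as np

# Sketch: a single linear unit y = w . x + noise, with zero-mean Gaussian
# inputs of covariance Sigma and noise variance sigma2. The information
# that y conveys about x is I(w) = 0.5 * log(1 + w^T Sigma w / sigma2).
# Under ||w|| = 1, I is maximized by the principal eigenvector of Sigma.
# Sigma and sigma2 below are arbitrary assumed values for illustration.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
Sigma = A @ A.T          # an arbitrary (positive semidefinite) input covariance
sigma2 = 0.1             # assumed output-noise variance

def info_rate(w):
    """Mutual information (nats) between input and noisy linear output."""
    return 0.5 * np.log(1.0 + w @ Sigma @ w / sigma2)

eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigenvalues in ascending order
w_star = eigvecs[:, -1]                    # principal eigenvector (unit norm)

# Any other unit-norm weight vector conveys no more information:
w_other = rng.standard_normal(3)
w_other /= np.linalg.norm(w_other)
assert info_rate(w_star) >= info_rate(w_other)
```

Since w^T Sigma w is bounded above by the largest eigenvalue over unit vectors, the principal component is the infomax solution whenever the noise is fixed and the weight norm is constrained.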



An Application of the Principle of Maximum Information Preservation to Linear Systems

Neural Information Processing Systems

I have previously proposed [Linsker, 1987, 1988] a principle of "maximum information preservation," also called the "infomax" principle, that may account for certain aspects of the organization of a layered perceptual network. The principle applies to a layer L of cells (which may be the input layer or an intermediate layer of the network) that provides input to a next layer M. The mapping of the input signal vector L onto an output signal vector M, f: L → M, is characterized by a conditional probability density function ("pdf") p(M | L).
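When the mapping is linear with additive Gaussian noise, M = C L + ν, the pdf p(M | L) is Gaussian and the ensemble-averaged Shannon information rate has the standard closed form ½ ln det(C Σ_L Cᵀ + σ² I) − ½ ln det(σ² I). The snippet below is a minimal numerical sketch of that formula; the weight matrix C, input covariance Σ_L, and noise variance σ² are arbitrary assumed values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 4, 2
C = rng.standard_normal((n_out, n_in))   # assumed connection-weight matrix
A = rng.standard_normal((n_in, n_in))
Sigma_L = A @ A.T                        # assumed input covariance
sigma2 = 0.5                             # assumed i.i.d. Gaussian noise variance

def gaussian_info_rate(C, Sigma_L, sigma2):
    """I(L; M) in nats for the linear-Gaussian channel M = C L + noise."""
    m = C.shape[0]
    Sigma_M = C @ Sigma_L @ C.T + sigma2 * np.eye(m)   # output covariance
    _, logdet_M = np.linalg.slogdet(Sigma_M)
    # 0.5 * [ln det(Sigma_M) - ln det(sigma2 * I)]
    return 0.5 * (logdet_M - m * np.log(sigma2))

I = gaussian_info_rate(C, Sigma_L, sigma2)
assert I >= 0.0   # signal can only add to the noise covariance
```

Because Σ_M dominates σ²I in the positive-semidefinite order, the rate is always nonnegative, and it grows as the projected signal covariance C Σ_L Cᵀ grows relative to the noise.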



Towards an Organizing Principle for a Layered Perceptual Network

Neural Information Processing Systems

Ralph Linsker, IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598. An information-theoretic optimization principle is proposed for the development of each processing stage of a multilayered perceptual network. This principle of "maximum information preservation" states that the signal transformation to be realized at each stage is one that maximizes the information that the output signal values (from that stage) convey about the input signal values (to that stage), subject to certain constraints and in the presence of processing noise. The quantity being maximized is a Shannon information rate. I provide motivation for this principle and, for some simple model cases, derive some of its consequences, discuss an algorithmic implementation, and show how the principle may lead to biologically relevant neural architectural features such as topographic maps, map distortions, orientation selectivity, and extraction of spatial and temporal signal correlations. A possible connection between this information-theoretic principle and a principle of minimum entropy production in nonequilibrium thermodynamics is suggested. This paper describes some properties of the proposed organizing principle; its purpose is to provide an intuitive and qualitative understanding of how the principle leads to specific feature-analyzing properties and signal transformations in some simple model cases. More detailed analysis is required in order to apply the principle to cases involving more realistic patterns of signaling activity as well as specific constraints on network connectivity.
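The abstract mentions an algorithmic implementation. One generic way to realize such an implementation in the linear-Gaussian setting (a hedged sketch under assumed statistics, not the paper's own procedure) is projected gradient ascent on the information rate: the gradient of ½ ln det(C Σ_L Cᵀ + σ² I) with respect to C is Σ_M⁻¹ C Σ_L, and a norm constraint on the weights is enforced by renormalizing each output unit's weight row after every step.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 5, 2
A = rng.standard_normal((n_in, n_in))
Sigma_L = A @ A.T        # assumed input covariance
sigma2 = 0.2             # assumed processing-noise variance

def info_rate(C):
    """Shannon information rate for M = C L + noise (nats)."""
    m = C.shape[0]
    Sigma_M = C @ Sigma_L @ C.T + sigma2 * np.eye(m)
    return 0.5 * (np.linalg.slogdet(Sigma_M)[1] - m * np.log(sigma2))

def info_grad(C):
    # dI/dC = Sigma_M^{-1} C Sigma_L
    m = C.shape[0]
    Sigma_M = C @ Sigma_L @ C.T + sigma2 * np.eye(m)
    return np.linalg.solve(Sigma_M, C @ Sigma_L)

# Projected gradient ascent from a random unit-row start.
C = rng.standard_normal((n_out, n_in))
C /= np.linalg.norm(C, axis=1, keepdims=True)
before = info_rate(C)
for _ in range(300):
    C += 0.01 * info_grad(C)
    C /= np.linalg.norm(C, axis=1, keepdims=True)   # weight-norm constraint
after = info_rate(C)
assert after > before
```

With a small step size this climbs toward weight rows spanning the high-variance subspace of the input, consistent with the dimensionality-reducing behavior described above.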