Sejnowski, Terrence J.
Learning Decision Theoretic Utilities through Reinforcement Learning
Stensmo, Magnus, Sejnowski, Terrence J.
Probability models can be used to predict outcomes and compensate for missing data, but even a perfect model cannot be used to make decisions unless the utilities of the outcomes, or preferences between them, are also provided. This arises in many real-world problems, such as medical diagnosis, where the cost of the test as well as the expected improvement in the outcome must be considered. Relatively little work has been done on learning the utilities of outcomes for optimal decision making. In this paper, we show how temporal-difference reinforcement learning (TD(λ)) can be used to determine decision theoretic utilities within the context of a mixture model and apply this new approach to a problem in medical diagnosis. TD(λ) learning of utilities reduces the number of tests that have to be done to achieve the same level of performance compared with the probability model alone, which results in significant cost savings and increased efficiency.
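The TD(λ) machinery the abstract relies on can be illustrated with a generic tabular value update using eligibility traces. This is a minimal sketch, not the paper's mixture-model or medical-diagnosis setup: the two-state chain task, learning rate, and trace parameter below are all illustrative assumptions.

```python
import numpy as np

def td_lambda(episodes, n_states, alpha=0.1, gamma=1.0, lam=0.9):
    """Tabular TD(lambda) with accumulating eligibility traces.

    `episodes` is a list of trajectories, each a list of
    (state, reward, next_state) transitions; terminal transitions
    use next_state = None.
    """
    V = np.zeros(n_states)           # estimated utility of each state
    for episode in episodes:
        e = np.zeros(n_states)       # eligibility traces, reset per episode
        for s, r, s_next in episode:
            v_next = 0.0 if s_next is None else V[s_next]
            delta = r + gamma * v_next - V[s]   # TD error
            e[s] += 1.0                         # accumulate trace for s
            V += alpha * delta * e              # update all traced states
            e *= gamma * lam                    # decay traces
    return V

# Toy chain: state 0 -> state 1 -> terminal, with reward 1 at the end,
# so both states should converge to utility ~1 under gamma = 1.
episodes = [[(0, 0.0, 1), (1, 1.0, None)]] * 200
V = td_lambda(episodes, n_states=2)
```

The eligibility trace is what lets the terminal reward propagate back to state 0 within a single episode, rather than one step per episode as in plain TD(0).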
Dynamic Features for Visual Speechreading: A Systematic Comparison
Gray, Michael S., Movellan, Javier R., Sejnowski, Terrence J.
Humans use visual as well as auditory speech signals to recognize spoken words. A variety of systems have been investigated for performing this task. The main purpose of this research was to systematically compare the performance of a range of dynamic visual features on a speechreading task. We have found that normalization of images to eliminate variation due to translation, scale, and planar rotation yielded substantial improvements in generalization performance regardless of the visual representation used. In addition, the dynamic information in the difference between successive frames yielded better performance than optical-flow based approaches, and, surprisingly, compression by local low-pass filtering outperformed global principal components analysis (PCA). These results are examined and possible explanations are explored.
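The representation the comparison favours can be sketched in a few lines: difference images between successive frames, compressed by local low-pass filtering. This is a toy illustration only; block averaging stands in for the paper's low-pass filters, and the function name, block size, and frame dimensions are assumptions.

```python
import numpy as np

def dynamic_features(frames, block=4):
    """Difference images pooled by local block averaging.

    `frames` has shape (T, H, W); returns one feature vector per
    successive frame pair. H and W must be divisible by `block`.
    """
    diffs = np.diff(frames, axis=0)      # frame-to-frame difference images
    T, H, W = diffs.shape
    # Local low-pass + compression: mean over non-overlapping blocks.
    pooled = diffs.reshape(T, H // block, block,
                           W // block, block).mean(axis=(2, 4))
    return pooled.reshape(T, -1)         # flatten for the classifier

# Example: 10 frames of 16x16 mouth-region images.
frames = np.random.default_rng(1).random((10, 16, 16))
F = dynamic_features(frames)   # 9 difference frames, each pooled to 4x4
```

Note that, unlike global PCA, the pooling here is purely local: each output value depends only on one spatial block of one difference image.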
Bayesian Unsupervised Learning of Higher Order Structure
Lewicki, Michael S., Sejnowski, Terrence J.
Many real-world patterns have a hierarchical underlying structure in which simple features have a higher-order structure among themselves. Because these relationships are often statistical in nature, it is natural to view the process of discovering such structures as a statistical inference problem in which a hierarchical model is fit to data. Hierarchical statistical structure can be conveniently represented with Bayesian belief networks (Pearl, 1988; Lauritzen and Spiegelhalter, 1988; Neal, 1992). These models are powerful, because they can capture complex statistical relationships among the data variables, and also mathematically convenient, because they allow efficient computation of the joint probability for any given set of model parameters. The joint probability of a network of binary states is given by a product of conditional probabilities,

P(s_1, ..., s_n | W) = ∏_i P(s_i | pa(s_i), W),    (1)

where W is the weight matrix that parameterizes the model and pa(s_i) denotes the parents of state s_i. Note that the probability of an individual state s_i depends only on its parents.
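The factorized joint probability in (1) is straightforward to compute. A minimal sketch, assuming a sigmoid belief network in which a lower-triangular weight matrix W makes each unit depend only on the units before it (the sigmoid conditional and the triangular parameterization are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def joint_prob(s, W):
    """Joint probability of binary state vector s under a sigmoid
    belief network: s_i's parents are s_0 .. s_{i-1}, weighted by
    row i of the lower-triangular matrix W.
    """
    p = 1.0
    for i in range(len(s)):
        # P(s_i = 1 | parents) from the weighted sum of parent states
        p_on = sigmoid(W[i, :i] @ s[:i]) if i > 0 else sigmoid(0.0)
        p *= p_on if s[i] == 1 else (1.0 - p_on)
    return p

# Sanity check: the probabilities of all four states of a
# two-unit network sum to one.
W = np.array([[0.0, 0.0],
              [2.0, 0.0]])
total = sum(joint_prob(np.array(s), W)
            for s in [(0, 0), (0, 1), (1, 0), (1, 1)])
```

The efficiency claim in the text is visible here: evaluating the joint probability of one configuration costs only one conditional per unit, with no summation over other states.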
Edges are the 'Independent Components' of Natural Scenes.
Bell, Anthony J., Sejnowski, Terrence J.
Field (1994) has suggested that neurons with line and edge selectivities found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and Barlow (1989) has reasoned that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual features. We show here that nonlinear 'infomax', when applied to an ensemble of natural scenes, produces sets of visual filters that are localised and oriented. Some of these filters are Gabor-like and resemble those produced by the sparseness-maximisation network of Olshausen & Field (1996). In addition, the outputs of these filters are as independent as possible, since the infomax network is able to perform Independent Components Analysis (ICA). We compare the resulting ICA filters and their associated basis functions with other decorrelating filters produced by Principal Components Analysis (PCA) and zero-phase whitening filters (ZCA). The ICA filters have more sparsely distributed (kurtotic) outputs on natural scenes. They also resemble the receptive fields of simple cells in visual cortex, which suggests that these neurons form an information-theoretic coordinate system for images. 1 Introduction. Both the classic experiments of Hubel & Wiesel [8] on neurons in visual cortex, and several decades of theorising about feature detection in vision, have left open the question most succinctly phrased by Barlow: "Why do we have edge detectors?" That is: are there any coding principles which would predict the formation of localised, oriented receptive fields?
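The ICA procedure behind these results can be sketched with the natural-gradient infomax rule ΔW = (I − tanh(u)uᵀ)W on a toy two-source problem. This is a hedged illustration, not the paper's experiment: the Laplacian sources, 2×2 mixing matrix, and learning schedule are assumptions standing in for training on whitened natural-image patches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: mix two independent sparse (Laplacian) sources.
S = rng.laplace(size=(2, 5000))           # independent sources
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])                # mixing matrix
X = A @ S                                 # observed mixtures

# Natural-gradient infomax update with a tanh score function,
# appropriate for super-Gaussian (sparse) sources: the fixed point
# makes the outputs u = W x as independent as possible.
W = np.eye(2)
lr = 0.02
n = X.shape[1]
for _ in range(3000):
    U = W @ X
    dW = (np.eye(2) - np.tanh(U) @ U.T / n) @ W
    W += lr * dW

# After learning, the rows of W @ A approach a scaled permutation
# matrix: each output recovers one source, up to sign and scale.
U = W @ X
```

With image patches in place of these toy mixtures, the rows of W are the ICA filters discussed in the abstract, and the columns of W⁻¹ are the associated basis functions.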
Cholinergic Modulation Preserves Spike Timing Under Physiologically Realistic Fluctuating Input
Tang, Akaysha C., Bartels, Andreas M., Sejnowski, Terrence J.
Recently, there has been a vigorous debate concerning the nature of neural coding (Rieke et al. 1996; Stevens and Zador 1995; Shadlen and Newsome 1994). The prevailing view has been that the mean firing rate conveys all information about the sensory stimulus in a spike train and that the precise timing of the individual spikes is noise. This belief is, in part, based on a lack of correlation between the precise timing of the spikes and the sensory qualities of the stimulus under study, particularly on a lack of spike timing repeatability when identical stimulation is delivered. This view has been challenged by a number of recent studies, in which highly repeatable temporal patterns of spikes can be observed both in vivo (Bair and Koch 1996; Abeles et al. 1993) and in vitro (Mainen and Sejnowski 1994). Furthermore, application of information theory to the coding problem in the frog and house fly (Bialek et al. 1991; Bialek and Rieke 1992) suggested that additional information could be extracted from spike timing. In the absence of direct evidence for a timing code in the cerebral cortex, the role of spike timing in neural coding remains controversial.