 Pouget, Alexandre


Neural Basis of Object-Centered Representations

Neural Information Processing Systems

We present a neural model that can perform eye movements to a particular side of an object regardless of the position and orientation of the object in space, a generalization of a task which has recently been used by Olson and Gettner [4] to investigate the neural structure of object-centered representations. Our model uses an intermediate representation in which units have oculocentric receptive fields, just like collicular neurons, whose gain is modulated by the side of the object to which the movement is directed, as well as by the orientation of the object. We show that these gain modulations are consistent with Olson and Gettner's single-cell recordings in the supplementary eye field. This demonstrates that it is possible to perform an object-centered task without a representation involving an object-centered map, viz., without neurons whose receptive fields are defined in object-centered coordinates. We also show that the same approach can account for object-centered neglect, a situation in which patients with a right parietal lesion neglect the left side of objects regardless of the orientation of the objects. Several authors have argued that tasks such as object recognition [3] and manipulation [4] are easier to perform if the object is represented in object-centered coordinates, a representation in which the subparts of the object are encoded with respect to a frame of reference centered on the object. Compelling evidence for the existence of such representations in the cortex comes from experiments on hemineglect, a neurological syndrome resulting from unilateral lesions of the parietal cortex such that a right lesion, for example, leads patients to ignore stimuli located on the left side of their egocentric space. Recently, Driver et al. (1994) showed that the deficit can also be object-centered.
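A minimal sketch of the gain-modulation idea described in this abstract, in Python with NumPy: a single hypothetical intermediate unit with an oculocentric Gaussian receptive field whose amplitude, not its preferred location, is scaled by the commanded side of the object and by the object's orientation. The function name, tuning shapes, and gain values are illustrative assumptions, not the network actually used in the paper.

```python
import numpy as np

def unit_response(target_pos, preferred_pos, side, orientation,
                  sigma=5.0, side_gain=(1.0, 0.4), kappa=1.0):
    """Hypothetical intermediate unit: an oculocentric Gaussian receptive
    field over retinal target position whose amplitude (gain), not its
    location, depends on the commanded side of the object and on the
    object's orientation."""
    target = np.asarray(target_pos, dtype=float)
    preferred = np.asarray(preferred_pos, dtype=float)
    # Oculocentric receptive field (retinal coordinates, degrees).
    rf = np.exp(-np.sum((target - preferred) ** 2) / (2.0 * sigma ** 2))
    # Multiplicative gain from the "go to the left/right side" command...
    g_side = side_gain[0] if side == "left" else side_gain[1]
    # ...and a smooth dependence on object orientation (radians).
    g_orientation = np.exp(kappa * (np.cos(orientation) - 1.0))
    return g_side * g_orientation * rf

# Same retinal target, two different commands: only the gain changes.
r_left = unit_response((3.0, 1.0), (2.0, 0.0), "left", 0.0)
r_right = unit_response((3.0, 1.0), (2.0, 0.0), "right", 0.0)
print(r_left, r_right)
```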


Statistically Efficient Estimations Using Cortical Lateral Connections

Neural Information Processing Systems

Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient, i.e., the variance of the estimate is much larger than the smallest possible variance, or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, by contrast, they typically encode the variable with a further population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform this estimation in an optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.
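The clean-up idea can be illustrated with a small numerical sketch, assuming Gaussian-profile lateral weights and divisive normalization as the nonlinearity; the dynamics, parameter values, and unit count below are illustrative rather than the paper's exact network.

```python
import numpy as np

def recurrent_cleanup(activity, preferred, sigma=0.4, s=0.1, mu=0.01, n_iter=20):
    """Iterate a nonlinear recurrent dynamic (Gaussian-profile lateral weights,
    squaring, divisive normalization) that relaxes a noisy pattern onto a
    smooth hill of activity: a denoised estimate that is still a coarse code
    rather than a scalar. Parameter values are illustrative only."""
    d = preferred[:, None] - preferred[None, :]
    w = np.exp((np.cos(d) - 1.0) / sigma ** 2)     # circular lateral weights
    a = np.array(activity, dtype=float)
    for _ in range(n_iter):
        u = w @ a                                  # lateral pooling
        a = u ** 2 / (s + mu * np.sum(u ** 2))     # divisive normalization
    return a

# Example: Poisson-noisy hill of activity over 64 direction-tuned units.
rng = np.random.default_rng(0)
prefs = np.linspace(-np.pi, np.pi, 64, endpoint=False)
clean = np.exp((np.cos(prefs - 0.5) - 1.0) / 0.3)
noisy = rng.poisson(20 * clean).astype(float) / 20.0
denoised = recurrent_cleanup(noisy, prefs)   # smooth hill, still a population code
```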


Probabilistic Interpretation of Population Codes

Neural Information Processing Systems

We present a theoretical framework for population codes which generalizes naturally to the important case where the population provides information about a whole probability distribution over an underlying quantity rather than just a single value. We use the framework to analyze two existing models, and to suggest and evaluate a third model for encoding such probability distributions.

1 Introduction

Population codes, where information is represented in the activities of whole populations of units, are ubiquitous in the brain. There has been substantial work on how animals should and/or actually do extract information about the underlying encoded quantity.
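One way to make the framework's target concrete is a convolution-style encoder in which each unit's firing rate is the overlap between its tuning curve and the whole distribution. The Python sketch below is a hypothetical illustration of that idea, not a reproduction of the two existing models or the third model evaluated in the paper.

```python
import numpy as np

def encode_distribution(p, x, preferred, sigma=0.5):
    """Rate of each unit is the overlap between its Gaussian tuning curve and
    the whole distribution p(x), so the population carries information about
    the full distribution rather than a single value. Convolution-style
    sketch; the paper's specific models differ in detail."""
    dx = x[1] - x[0]
    rates = []
    for c in preferred:
        f = np.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))   # tuning curve
        rates.append(np.sum(f * p) * dx)                 # overlap with p(x)
    return np.array(rates)

# A bimodal distribution over x, encoded by 32 units.
x = np.linspace(-5.0, 5.0, 501)
p = 0.6 * np.exp(-(x + 1.5) ** 2 / 0.5) + 0.4 * np.exp(-(x - 2.0) ** 2 / 0.5)
p /= np.sum(p) * (x[1] - x[0])                           # normalize to a density
prefs = np.linspace(-4.0, 4.0, 32)
rates = encode_distribution(p, x, prefs)
```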


Selective Integration: A Model for Disparity Estimation

Neural Information Processing Systems

Local disparity information is often sparse and noisy, which creates two conflicting demands when estimating disparity in an image region: the need to spatially average to get an accurate estimate, and the problem of not averaging over discontinuities. We have developed a network model of disparity estimation based on disparity-selective neurons, such as those found in the early stages of processing in visual cortex. The model can accurately estimate multiple disparities in a region, which may be caused by transparency or occlusion, in real images and random-dot stereograms. The use of a selection mechanism to selectively integrate reliable local disparity estimates results in superior performance compared to standard back-propagation and cross-correlation approaches. In addition, the representations learned with this selection mechanism are consistent with recent neurophysiological results of von der Heydt, Zhou, Friedman, and Poggio [8] for cells in cortical visual area V2. Combining multi-scale biologically-plausible image processing with the power of the mixture-of-experts learning algorithm represents a promising approach that yields both high performance and new insights into visual system function.
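The role of the selection mechanism can be illustrated with a toy Python sketch: a softmax over a reliability signal decides how much each local estimate contributes, so a uniform spatial average across a discontinuity is avoided. The reliability values and softmax gating below are assumptions for illustration; the paper's model is a trained mixture-of-experts network operating on multi-scale image measurements.

```python
import numpy as np

def selective_integration(local_disparity, reliability, beta=5.0):
    """Combine local disparity estimates with a soft selection (softmax over a
    reliability signal) instead of a uniform spatial average, so unreliable
    locations near a discontinuity contribute little. Illustrative sketch;
    not the trained mixture-of-experts network of the paper."""
    w = np.exp(beta * (reliability - np.max(reliability)))  # stable softmax
    w = w / np.sum(w)
    return float(np.sum(w * local_disparity))

# Reliable estimates near d = 1.0; noisy, unreliable ones near d = 4.0.
d_local = np.array([1.0, 1.1, 0.9, 4.2, 3.8, 1.0])
reliab = np.array([0.9, 0.8, 0.9, 0.1, 0.2, 0.85])
print(selective_integration(d_local, reliab))   # close to 1.0, not the raw mean
```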


A Model of Spatial Representations in Parietal Cortex Explains Hemineglect

Neural Information Processing Systems

We have recently developed a theory of spatial representations in which the position of an object is not encoded in a particular frame of reference but, instead, involves neurons computing basis functions of their sensory inputs. This type of representation is able to perform nonlinear sensorimotor transformations and is consistent with the response properties of parietal neurons. We now ask whether the same theory could account for the behavior of human patients with parietal lesions. These lesions induce a deficit known as hemineglect that is characterized by a lack of reaction to stimuli located in the hemispace contralateral to the lesion. A simulated lesion in a basis function representation was found to replicate three of the most important aspects of hemineglect: i) the model failed to cross the leftmost lines in line cancellation experiments, ii) the deficit affected multiple frames of reference, and iii) it could be object-centered. These results strongly support the basis function hypothesis for spatial representations and provide a computational theory of hemineglect at the single-cell level.

1 Introduction

According to current theories of spatial representations, the positions of objects are represented in multiple modules throughout the brain, each module being specialized for a particular sensorimotor transformation and using its own frame of reference. For instance, the lateral intraparietal area (LIP) appears to encode the location of objects in oculocentric coordinates, presumably for the control of saccadic eye movements.
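A toy sketch of the basis-function idea and of how a lesion can be simulated as a neuronal gradient: each unit multiplies a Gaussian of retinal position by a sigmoid of eye position, and the simulated lesion removes units preferring leftward retinal positions with higher probability. All numerical choices below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def basis_function_responses(retinal_x, eye_x, rf_centers, eye_thresholds,
                             sigma=4.0, slope=0.5):
    """Each unit multiplies a Gaussian of retinal position by a sigmoid of eye
    position, i.e. computes a basis function of the two inputs.
    Values are illustrative."""
    gauss = np.exp(-(retinal_x - rf_centers) ** 2 / (2.0 * sigma ** 2))
    sigm = 1.0 / (1.0 + np.exp(-slope * (eye_x - eye_thresholds)))
    return gauss * sigm

rng = np.random.default_rng(1)
centers = np.linspace(-40.0, 40.0, 81)            # preferred retinal positions (deg)
thresholds = rng.uniform(-20.0, 20.0, size=81)    # preferred eye-position thresholds
# Simulated right-hemisphere lesion: survival probability of a unit grows
# from left to right across preferred retinal positions (a neuronal gradient).
survive = rng.random(81) < np.linspace(0.2, 1.0, 81)
resp_left = basis_function_responses(-20.0, 0.0, centers, thresholds)[survive].sum()
resp_right = basis_function_responses(20.0, 0.0, centers, thresholds)[survive].sum()
# resp_left < resp_right: stimuli on the left side of space are more weakly
# represented after the lesion, the signature of neglect.
print(resp_left, resp_right)
```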