Where Does the Population Vector of Motor Cortical Cells Point during Reaching Movements?

Neural Information Processing Systems

Visually-guided arm reaching movements are produced by distributed neural networks within parietal and frontal regions of the cerebral cortex. Experimental data indicate that (1) single neurons in these regions are broadly tuned to parameters of movement; (2) appropriate commands are elaborated by populations of neurons; (3) the coordinated action of neurons can be visualized using a neuronal population vector (NPV). However, the NPV provides only a rough estimate of movement parameters (direction, velocity) and may even fail to reflect the parameters of movement when arm posture is changed. We designed a model of the cortical motor command to investigate the relation between the desired direction of the movement, the actual direction of movement and the direction of the NPV in motor cortex. The model is a two-layer self-organizing neural network which combines broadly-tuned (muscular) proprioceptive and (cartesian) visual information to calculate (angular) motor commands for the initial part of the movement of a two-link arm. The network was trained by motor babbling in 5 positions. Simulations showed that (1) the network produced appropriate movement direction over a large part of the workspace; (2) small deviations of the actual trajectory from the desired trajectory existed at the extremities of the workspace; (3) these deviations were accompanied by large deviations of the NPV from both trajectories. These results suggest that the NPV does not give a faithful image of cortical processing during arm reaching movements.
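As background for the NPV discussed above, the population vector is conventionally formed by weighting each cell's preferred-direction unit vector by its firing rate (relative to baseline) and summing. The sketch below assumes idealized cosine-tuned cells with roughly uniform preferred directions; the tuning parameters are illustrative and not taken from the paper's trained network.

import numpy as np

def population_vector(preferred_dirs, rates, baseline=0.0):
    """Sum each cell's preferred-direction unit vector weighted by its
    firing rate relative to baseline (standard NPV construction)."""
    weights = rates - baseline
    return (weights[:, None] * preferred_dirs).sum(axis=0)

# Illustrative example: 100 cosine-tuned cells, movement toward 30 degrees.
rng = np.random.default_rng(0)
angles = rng.uniform(0, 2 * np.pi, 100)             # preferred directions
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
movement = np.array([np.cos(np.radians(30)), np.sin(np.radians(30))])
rates = 10 + 8 * dirs @ movement                     # cosine tuning + baseline
npv = population_vector(dirs, rates, baseline=10)
print(np.degrees(np.arctan2(npv[1], npv[0])))        # close to 30 when coverage is uniform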


Computation of Smooth Optical Flow in a Feedback Connected Analog Network

Neural Information Processing Systems

In 1986, Tanner and Mead [1] implemented an interesting constraint satisfaction circuit for global motion sensing in analog VLSI. We report here a new and improved analog VLSI implementation that provides smooth optical flow as well as global motion in a two-dimensional visual field. The computation of optical flow is an ill-posed problem, which expresses itself as the aperture problem. However, the optical flow can be estimated by the use of regularization methods, in which additional constraints are introduced in terms of a global energy functional that must be minimized. We show how the algorithmic constraints of Horn and Schunck [2] on computing smooth optical flow can be mapped onto the physical constraints of an equivalent electronic network.
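For readers unfamiliar with the Horn and Schunck constraints mentioned above, the sketch below shows their standard iterative software form, which trades the brightness-constancy data term against a smoothness (regularization) term of weight alpha. It illustrates the algorithm that is mapped onto the network, not the analog circuit itself; the uniform local average is a simple stand-in for the Horn-Schunck neighborhood average.

import numpy as np
from scipy.ndimage import uniform_filter

def horn_schunck(Ix, Iy, It, alpha=1.0, iters=100):
    """Classic Horn-Schunck iteration: enforce brightness constancy
    (Ix*u + Iy*v + It ~ 0) while keeping the flow field (u, v) smooth."""
    u = np.zeros_like(Ix)
    v = np.zeros_like(Ix)
    denom = alpha**2 + Ix**2 + Iy**2
    for _ in range(iters):
        u_bar = uniform_filter(u, size=3)   # local average = smoothness coupling
        v_bar = uniform_filter(v, size=3)
        t = (Ix * u_bar + Iy * v_bar + It) / denom
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v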


Divisive Normalization, Line Attractor Networks and Ideal Observers

Neural Information Processing Systems

We explore in this study the statistical properties of divisive normalization in the presence of noise. Using simulations, we show that divisive normalization is a close approximation to a maximum likelihood estimator, which, in the context of population coding, is the same as an ideal observer. We also demonstrate analytically that this is a general property of a large class of nonlinear recurrent networks with line attractors. Our work suggests that divisive normalization plays a critical role in noise filtering, and that every cortical layer may be an ideal observer of the activity in the preceding layer. Information processing in the cortex is often formalized as a sequence of linear stages followed by a nonlinearity.
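As a reminder of the operation being analyzed, divisive normalization divides each unit's power-law activity by the pooled activity of the population plus a semi-saturation constant. The exponent and constant below are conventional illustrative choices, not values fitted in the paper.

import numpy as np

def divisive_normalization(a, sigma=1.0, exponent=2.0):
    """Each unit's (squared) activity divided by the pooled population
    activity plus a semi-saturation constant sigma."""
    num = np.abs(a) ** exponent
    return num / (sigma ** exponent + num.sum())

# Illustrative: a noisy hill of activity over preferred orientations.
rng = np.random.default_rng(1)
theta = np.linspace(-np.pi, np.pi, 64, endpoint=False)
activity = np.exp(np.cos(theta - 0.5) / 0.3) + rng.normal(0, 0.2, 64)
normalized = divisive_normalization(activity)
print(normalized.sum())   # pooled response is bounded by the normalization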


Temporally Asymmetric Hebbian Learning, Spike Timing and Neural Response Variability

Neural Information Processing Systems

Recent experimental data indicate that the strengthening or weakening of synaptic connections between neurons depends on the relative timing of pre-and postsynaptic action potentials. A Hebbian synaptic modification rule based on these data leads to a stable state in which the excitatory and inhibitory inputs to a neuron are balanced, producing an irregular pattern of firing. It has been proposed that neurons in vivo operate in such a mode.
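A minimal sketch of such a temporally asymmetric (spike-timing-dependent) rule is given below. The amplitudes and time constants are illustrative assumptions; making depression slightly stronger than potentiation, as such rules typically require for stability, is also an assumption here rather than a value from the paper.

import numpy as np

def stdp_weight_change(t_pre, t_post, a_plus=0.005, a_minus=0.00525,
                       tau_plus=20.0, tau_minus=20.0):
    """Temporally asymmetric Hebbian rule: potentiate when the presynaptic
    spike precedes the postsynaptic spike, depress otherwise. Times in ms."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * np.exp(-dt / tau_plus)     # pre before post: LTP
    return -a_minus * np.exp(dt / tau_minus)       # post before (or with) pre: LTD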


Attentional Modulation of Human Pattern Discrimination Psychophysics Reproduced by a Quantitative Model

Neural Information Processing Systems

We previously proposed a quantitative model of early visual processing in primates, based on non-linearly interacting visual filters and statistically efficient decision. We now use this model to interpret the observed modulation of a range of human psychophysical thresholds with and without focal visual attention. Our model - calibrated by an automatic fitting procedure - simultaneously reproduces thresholds for four classical pattern discrimination tasks, performed while attention was engaged by another concurrent task. Our model then predicts that the seemingly complex improvements of certain thresholds, which we observed when attention was fully available for the discrimination tasks, can best be explained by a strengthening of competition among early visual filters.

1 INTRODUCTION

What happens when we voluntarily focus our attention on a restricted part of our visual field? Focal attention is often thought of as a gating mechanism, which selectively allows a certain spatial location and certain types of visual features to reach higher visual processes.
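One simple way to picture "strengthening of competition among early visual filters" is to increase the exponents of a divisive interaction among filter energies, so that the strongest filter suppresses its competitors more sharply. The functional form and parameter values below are illustrative assumptions, not the fitted model from the paper.

import numpy as np

def filter_responses(energies, gamma=2.0, delta=1.5, s=1.0):
    """Non-linearly interacting filters: each energy is raised to a power and
    divided by a pooled, power-law sum over all filter energies."""
    return energies ** gamma / (s ** delta + (energies ** delta).sum())

stimulus = np.array([1.0, 0.8, 0.3, 0.2])                     # filter energies
unattended = filter_responses(stimulus, gamma=2.0, delta=1.5)
attended = filter_responses(stimulus, gamma=3.0, delta=2.5)    # stronger competition
print(unattended / unattended.max())
print(attended / attended.max())     # attended responses are more sharply peaked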


Mechanisms of Generalization in Perceptual Learning

Neural Information Processing Systems

The learning of many visual perceptual tasks has been shown to be specific to practiced stimuli, while new stimuli require re-learning from scratch. Here we demonstrate generalization using a novel paradigm in motion discrimination where learning has previously been shown to be specific. We trained subjects to discriminate the directions of moving dots, and verified the previous results that learning does not transfer from the trained direction to a new one. However, by tracking the subjects' performance across time in the new direction, we found that their rate of learning doubled. Therefore, learning generalized in a task previously considered too difficult for generalization.


A Model for Associative Multiplication

Neural Information Processing Systems

Despite the fact that mental arithmetic is based on only a few hundred basic facts and some simple algorithms, humans have a difficult time mastering the subject, and even experienced individuals make mistakes. Associative multiplication, the process of doing multiplication by memory without the use of rules or algorithms, is especially problematic.


Barycentric Interpolators for Continuous Space and Time Reinforcement Learning

Neural Information Processing Systems

In order to find the optimal control of continuous state-space, continuous-time reinforcement learning (RL) problems, we approximate the value function (VF) with a particular class of functions called barycentric interpolators. We establish sufficient conditions under which an RL algorithm converges to the optimal VF, even when we use approximate models of the state dynamics and the reinforcement functions.
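The interpolation scheme itself is straightforward: the value at a state is a convex combination of the values stored at the vertices of the simplex containing it, with non-negative coefficients that sum to one. The 2-D sketch below illustrates only this construction, not the paper's convergence analysis; the triangle and stored values are hypothetical.

import numpy as np

def barycentric_value(x, vertices, values):
    """Interpolate V(x) = sum_i lambda_i(x) * V(xi_i), where the lambda_i are
    the barycentric coordinates of x in the simplex (lambda_i >= 0, sum = 1)."""
    a, b, c = vertices
    T = np.column_stack([b - a, c - a])
    l1, l2 = np.linalg.solve(T, x - a)
    lam = np.array([1.0 - l1 - l2, l1, l2])
    assert np.all(lam >= -1e-9), "x must lie inside the simplex"
    return lam @ values

triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
v_at_vertices = np.array([0.0, 2.0, 4.0])        # stored value-function samples
print(barycentric_value(np.array([0.25, 0.25]), triangle, v_at_vertices))  # 1.5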


Tractable Variational Structures for Approximating Graphical Models

Neural Information Processing Systems

Graphical models provide a broad probabilistic framework with applications in speech recognition (Hidden Markov Models), medical diagnosis (Belief Networks) and artificial intelligence (Boltzmann Machines). However, the computing time is typically exponential in the number of nodes in the graph. Within the variational framework for approximating these models, we present two classes of distributions, decimatable Boltzmann Machines and Tractable Belief Networks, that go beyond the standard factorized approach. We give generalised mean-field equations for both these directed and undirected approximations. Simulation results on a small benchmark problem suggest that these richer approximations compare favorably with others previously reported in the literature.

1 Introduction

Graphical models provide a powerful framework for probabilistic inference [1] but suffer from intractability when applied to large-scale problems.
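For contrast with the richer structures proposed above, the standard factorized (naive mean-field) approximation for a Boltzmann machine with +/-1 units iterates m_i = tanh(sum_j W_ij m_j + b_i) to self-consistency. The sketch below shows this factorized baseline only; the small random symmetric coupling matrix is purely illustrative.

import numpy as np

def mean_field_boltzmann(W, b, iters=200):
    """Naive mean-field fixed-point iteration for a Boltzmann machine with
    +/-1 units; returns approximate marginal magnetizations <s_i>."""
    m = np.zeros_like(b)
    for _ in range(iters):
        m = np.tanh(W @ m + b)
    return m

rng = np.random.default_rng(2)
n = 5
W = rng.normal(0, 0.3, (n, n))
W = (W + W.T) / 2                  # symmetric couplings
np.fill_diagonal(W, 0.0)           # no self-coupling
b = rng.normal(0, 0.5, n)
print(mean_field_boltzmann(W, b))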


Maximum-Likelihood Continuity Mapping (MALCOM): An Alternative to HMMs

Neural Information Processing Systems

We describe Maximum-Likelihood Continuity Mapping (MALCOM), an alternative to hidden Markov models (HMMs) for processing sequence data such as speech. While HMMs have a discrete "hidden" space constrained by a fixed finite-automaton architecture, MALCOM has a continuous hidden space-a continuity map-that is constrained only by a smoothness requirement on paths through the space. MALCOM fits into the same probabilistic framework for speech recognition as HMMs, but it represents a more realistic model of the speech production process. To evaluate the extent to which MALCOM captures speech production information, we generated continuous speech continuity maps for three speakers and used the paths through them to predict measured speech articulator data. The median correlation between the MALCOM paths obtained from only the speech acoustics and articulator measurements was 0.77 on an independent test set not used to train MALCOM or the predictor.