Visual Development Aids the Acquisition of Motion Velocity Sensitivities

Neural Information Processing Systems

We consider the hypothesis that systems learning aspects of visual perception may benefit from the use of suitably designed developmental progressions during training. Four models were trained to estimate motion velocities in sequences of visual images. Three of the models were "developmental models" in the sense that the nature of their input changed during the course of training. They received a relatively impoverished visual input early in training, and the quality of this input improved as training progressed. One model used a coarse-to-multiscale developmental progression (i.e., it received coarse-scale motion features early in training, and finer-scale features were added to its input as training progressed), another model used a fine-to-multiscale progression, and the third model used a random progression.
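The developmental progressions described here amount to a training curriculum in which coarse-scale features are presented first and finer scales are unmasked later. The sketch below illustrates that idea on a toy velocity-regression problem; the feature layout, masking scheme, and linear learner are illustrative assumptions, not the paper's four models.

```python
# Toy coarse-to-multiscale curriculum (illustrative assumptions, not the paper's models):
# motion features live at three spatial scales, and finer scales are only unmasked
# in later training phases while a linear "velocity" regressor is fit by SGD.
import numpy as np

rng = np.random.default_rng(0)
n_scales, feats_per_scale = 3, 8            # scale 0 = coarsest
dim = n_scales * feats_per_scale
true_w = rng.normal(size=dim)               # toy target: velocity is linear in the features

def batch(n=64):
    X = rng.normal(size=(n, dim))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

w, lr = np.zeros(dim), 0.01
for phase in range(n_scales):               # phase 0: coarse only; each phase adds a finer scale
    visible = np.zeros(dim)
    visible[: (phase + 1) * feats_per_scale] = 1.0
    for _ in range(2000):
        X, y = batch()
        Xv = X * visible                    # impoverished input early in training
        w -= lr * Xv.T @ (Xv @ w - y) / len(y)
    Xt, yt = batch(512)
    mse = np.mean(((Xt * visible) @ w - yt) ** 2)
    print(f"phase {phase}: MSE on masked input = {mse:.3f}")
```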


Timing and Partial Observability in the Dopamine System

Neural Information Processing Systems

According to a series of influential models, dopamine (DA) neurons signal reward prediction error using a temporal-difference (TD) algorithm. We address a problem not convincingly solved in these accounts: how to maintain a representation of cues that predict delayed consequences. Our new model uses a TD rule grounded in partially observable semi-Markov processes, a formalism that captures two largely neglected features of DA experiments: hidden state and temporal variability. Previous models predicted rewards using a tapped delay line representation of sensory inputs; we replace this with a more active process of inference about the underlying state of the world. The DA system can then learn to map these inferred states to reward predictions using TD. The new model can explain previously vexing data on the responses of DA neurons in the face of temporal variability. By combining statistical model-based learning with a physiologically grounded TD theory, it also brings into contact with physiology some insights about behavior that had previously been confined to more abstract psychological models.
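Two of the ingredients named above, inference over a hidden state and a TD rule applied to the result, can be sketched with a toy partially observable chain. This is only a schematic stand-in for the model (the transition, observation, dwell-time, and reward settings below are invented for illustration), but it shows the basic shape of a TD update computed on an inferred belief state rather than on a tapped delay line.

```python
# Minimal sketch (not the paper's model): (1) infer a belief over hidden states from
# noisy observations, and (2) apply a TD rule to that belief state, discounting by a
# variable dwell time between events, semi-Markov style. For simplicity the dwell time
# only enters through the discount, not through the belief prediction.
import numpy as np

rng = np.random.default_rng(1)
n_states = 3
T = np.array([[0.8, 0.2, 0.0],            # hidden-state transition matrix (assumed)
              [0.0, 0.7, 0.3],
              [0.3, 0.0, 0.7]])
O = np.array([[0.9, 0.1],                 # observation likelihoods P(obs | state)
              [0.5, 0.5],
              [0.1, 0.9]])
R = np.array([0.0, 0.0, 1.0])             # reward delivered in state 2
gamma, alpha = 0.98, 0.1
w = np.zeros(n_states)                    # value weights over belief features

def belief_update(b, obs):
    b = (b @ T) * O[:, obs]               # predict one step, then weight by the observation
    return b / b.sum()

s, b = 0, np.ones(n_states) / n_states
for step in range(5000):
    dwell = rng.integers(1, 4)            # variable time elapsed before the next event
    s = rng.choice(n_states, p=T[s])
    obs = rng.choice(2, p=O[s])
    b_next = belief_update(b, obs)
    r = R[s]
    delta = r + (gamma ** dwell) * (w @ b_next) - (w @ b)   # TD error on belief states
    w += alpha * delta * b
    b = b_next
print("learned values of belief features:", np.round(w, 2))
```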


Neural Decoding of Cursor Motion Using a Kalman Filter

Neural Information Processing Systems

The direct neural control of external devices such as computer displays or prosthetic limbs requires the accurate decoding of neural activity representing continuous movement. We develop a real-time control system using the spiking activity of approximately 40 neurons recorded with an electrode array implanted in the arm area of primary motor cortex. In contrast to previous work, we develop a control-theoretic approach that explicitly models the motion of the hand and the probabilistic relationship between this motion and the mean firing rates of the cells in 70 ms bins. We focus on a realistic cursor control task in which the subject must move a cursor to "hit" randomly placed targets on a computer monitor. Encoding and decoding of the neural data is achieved with a Kalman filter, which has a number of advantages over previous linear filtering techniques. In particular, the Kalman filter reconstructions of hand trajectories in off-line experiments are more accurate than previously reported results, and the model provides insights into the nature of the neural coding of movement.
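The decoder described here has a standard linear Gaussian structure, which the following sketch reproduces on synthetic data: hand kinematics evolve under an assumed constant-velocity model, binned firing rates are a noisy linear function of the kinematics, and the Kalman filter alternates predict and correct steps. All matrices, noise levels, and the simulated recording are placeholders, not the fitted models from the paper.

```python
# Sketch of a Kalman-filter decoder of the kind described above: hand kinematics x_t
# follow a linear Gaussian dynamics model, and binned firing rates z_t are a noisy
# linear function of x_t. All parameters and data here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_cells, dim = 40, 4                      # ~40 neurons; state = (pos_x, pos_y, vel_x, vel_y)
dt = 0.07                                 # 70 ms bins
A = np.eye(dim); A[0, 2] = A[1, 3] = dt   # constant-velocity dynamics (assumed)
W = 1e-3 * np.eye(dim)                    # process noise covariance
H = rng.normal(size=(n_cells, dim))       # firing rate = H x + noise (fit from data in practice)
Q = 0.5 * np.eye(n_cells)                 # observation noise covariance

# Simulate a short trajectory and the corresponding binned firing rates.
xs = [np.zeros(dim)]
for _ in range(200):
    xs.append(A @ xs[-1] + rng.multivariate_normal(np.zeros(dim), W))
zs = [H @ x + rng.multivariate_normal(np.zeros(n_cells), Q) for x in xs[1:]]

# Standard Kalman filter recursion: predict, then correct with each new bin of rates.
x_hat, P = np.zeros(dim), np.eye(dim)
decoded = []
for z in zs:
    x_hat, P = A @ x_hat, A @ P @ A.T + W             # predict
    S = H @ P @ H.T + Q
    K = P @ H.T @ np.linalg.inv(S)                    # Kalman gain
    x_hat = x_hat + K @ (z - H @ x_hat)               # correct
    P = (np.eye(dim) - K @ H) @ P
    decoded.append(x_hat.copy())

true_pos = np.array(xs[1:])[:, :2]
est_pos = np.array(decoded)[:, :2]
print("mean position error:", np.mean(np.linalg.norm(est_pos - true_pos, axis=1)))
```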


Classifying Patterns of Visual Motion - a Neuromorphic Approach

Neural Information Processing Systems

We report a system that classifies and can learn to classify patterns of visual motion online. The complete system is described by the dynamics of its physical network architectures. The combination of the following properties makes the system novel: firstly, the front end of the system consists of an aVLSI optical flow chip that collectively computes 2-D global visual motion in real time [1]; secondly, the complexity of the classification task is significantly reduced by mapping the continuous motion trajectories to sequences of 'motion events'; and thirdly, all the network structures are simple and, with the exception of the optical flow chip, based on a Winner-Take-All (WTA) architecture. We demonstrate the application of the proposed generic system for a contactless man-machine interface that allows the user to write letters by visual motion. Given the low complexity of the system, its robustness, and the already existing front end, a complete aVLSI system-on-chip implementation is realistic, allowing various applications in mobile electronic devices.
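The network structures are described as being built on a Winner-Take-All architecture. The snippet below is a generic soft-WTA rate model (not the aVLSI circuit itself, and all constants are illustrative assumptions): units driven by their inputs compete through a shared global inhibitory signal until the most strongly driven unit dominates.

```python
# Generic soft winner-take-all dynamics (a sketch of the WTA building block named above,
# not the aVLSI circuit): excitatory units driven by their inputs compete through a
# shared global inhibitory signal until one unit remains active.
import numpy as np

def wta(inputs, steps=300, dt=0.05, tau=1.0, w_exc=1.2, w_inh=1.5):
    """Relax simple rate dynamics and return the index of the winning unit."""
    x = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        inhibition = w_inh * x.sum()                      # global inhibitory feedback
        dx = (-x + np.maximum(0.0, inputs + w_exc * x - inhibition)) / tau
        x = x + dt * dx
    return int(np.argmax(x)), x

winner, activity = wta(np.array([0.9, 1.0, 0.4]))
print("winner:", winner, "activities:", np.round(activity, 3))
```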


Developing Topography and Ocular Dominance Using Two aVLSI Vision Sensors and a Neurotrophic Model of Plasticity

Neural Information Processing Systems

A neurotrophic model for the co-development of topography and ocular dominance columns in the primary visual cortex has recently been proposed. In the present work, we test this model by driving it with the output of a pair of neuronal vision sensors stimulated by disparate moving patterns. We show that the temporal correlations in the spike trains generated by the two sensors elicit the development of refined topography and ocular dominance columns, even in the presence of significant amounts of spontaneous activity and fixed-pattern noise in the sensors.
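The key driving signal here is the difference between within-eye and between-eye temporal correlations. The sketch below is not the neurotrophic model from the paper; it is a generic correlation-based (Hebbian) alternative, included only to show how two input streams with stronger within-eye than between-eye correlations can let one eye's afferents take over a cortical unit. A small initial weight bias stands in for the activity fluctuations that break the symmetry in a full simulation.

```python
# Generic correlation-based sketch (Hebbian learning with subtractive normalization and
# weight clipping), NOT the paper's neurotrophic model: within-eye correlated activity
# plus competition for a fixed total synaptic weight drives ocular dominance.
import numpy as np

rng = np.random.default_rng(3)
n = 20                                                        # afferents per eye
w = np.concatenate([np.full(n, 0.55), np.full(n, 0.45)])      # [left | right], slight left bias
target, lr = w.sum(), 0.005

for _ in range(2000):
    left = rng.normal() + 0.3 * rng.normal(size=n)            # correlated within the left eye
    right = rng.normal() + 0.3 * rng.normal(size=n)           # correlated within the right eye, independent of left
    x = np.concatenate([left, right])
    y = w @ x                                                  # linear cortical unit
    w += lr * y * x                                            # Hebbian growth
    w -= (w.sum() - target) / w.size                           # subtractive normalization (competition)
    w = np.clip(w, 0.0, 1.0)

print(f"fraction of synaptic weight from the left eye: {w[:n].sum() / w.sum():.2f}")
```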


Dynamical Causal Learning

Neural Information Processing Systems

This paper focuses on people's short-run behavior by examining dynamical versions of three currently active quantitative theories of human causal judgment and comparing their predictions to a real-world dataset.


Value-Directed Compression of POMDPs

Neural Information Processing Systems

We examine the problem of generating state-space compressions of POMDPs in a way that minimally impacts decision quality. We analyze the impact of compressions on decision quality, observing that compressions that allow accurate policy evaluation (prediction of expected future reward) will not affect decision quality. We derive a set of sufficient conditions that ensure accurate prediction in this respect, illustrate interesting mathematical properties these confer on lossless linear compressions, and use these to derive an iterative procedure for finding good linear lossy compressions. We also elaborate on how structured representations of a POMDP can be used to find such compressions.
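One concrete way to realize the "accurate prediction" requirement for a lossless linear compression is to collect the reward vectors and close them under the POMDP's combined transition/observation dynamics, keeping only linearly independent directions. The sketch below does this on a structured toy POMDP; it is a schematic reading of the conditions described above (the paper's exact iterative procedure and its lossy variant are not reproduced), and all problem parameters are invented.

```python
# Sketch of a lossless linear compression in the spirit described above (a Krylov-style
# closure is assumed here; it is not the paper's exact procedure). The basis starts with
# the reward vectors and is closed under M[a][z][s, s'] = P(s'|s,a) * P(z|s',a); beliefs
# projected onto the resulting basis still support exact expected-reward prediction.
import numpy as np

rng = np.random.default_rng(4)
n_classes, per_class, n_actions, n_obs = 5, 6, 2, 2
n_states = n_classes * per_class
cls = np.repeat(np.arange(n_classes), per_class)          # states fall into 5 latent classes

# Structured toy POMDP: dynamics, observations, and rewards depend only on the class,
# so a small lossless basis exists even though there are 30 underlying states.
Pc = rng.dirichlet(np.ones(n_classes), size=(n_actions, n_classes))
Zc = rng.dirichlet(np.ones(n_obs), size=(n_actions, n_classes))
R = rng.normal(size=(n_classes, n_actions))[cls]          # R[s, a]
P = np.zeros((n_actions, n_states, n_states))
for a in range(n_actions):
    for s in range(n_states):
        P[a, s] = Pc[a, cls[s]][cls] / per_class          # land uniformly within the destination class
M = [[P[a] * Zc[a][cls, z][None, :] for z in range(n_obs)] for a in range(n_actions)]

def try_add(basis, v, tol=1e-8):
    """Append v if it adds a new direction to the current span."""
    cand = np.column_stack(basis + [v])
    if np.linalg.matrix_rank(cand, tol=tol) > len(basis):
        basis.append(v)
        return True
    return False

basis = []
for a in range(n_actions):
    try_add(basis, R[:, a])
changed = True
while changed:                                            # close the basis under the dynamics
    changed = False
    for a in range(n_actions):
        for z in range(n_obs):
            for v in list(basis):
                changed |= try_add(basis, M[a][z] @ v)

print(f"lossless compressed dimension: {len(basis)} (vs. {n_states} states)")
```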


Recovering Intrinsic Images from a Single Image

Neural Information Processing Systems

We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, each image derivative is classified as being caused by shading or a change in the surface's reflectance. Generalized Belief Propagation is then used to propagate information from areas where the correct classification is clear to areas where it is ambiguous. We also show results on real images.
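The pipeline described here (classify each derivative, then propagate and reintegrate) can be sketched end to end on a synthetic image, with the trained classifier and the Generalized Belief Propagation step replaced by a crude threshold on derivative magnitude. Everything below, the image, the threshold, and the least-squares reintegration, is an illustrative stand-in, not the paper's method.

```python
# Pipeline sketch (not the paper's classifier or belief-propagation step): take log-image
# derivatives, label each one "shading" or "reflectance" with a placeholder threshold rule,
# zero out the reflectance-labelled derivatives, and reintegrate the remaining gradient
# field by least squares; the reflectance image is the leftover residual.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

H, W = 32, 32
yy, xx = np.mgrid[0:H, 0:W]
shading = 0.02 * xx                                   # smooth illumination gradient (log domain)
reflectance = np.where(xx > W // 2, 0.5, 0.0)         # a sharp albedo edge (log domain)
log_img = shading + reflectance

dx = np.diff(log_img, axis=1)                         # horizontal derivatives, shape (H, W-1)
dy = np.diff(log_img, axis=0)                         # vertical derivatives, shape (H-1, W)

# Placeholder classifier: small derivatives -> shading, large ones -> reflectance.
# The paper uses colour cues and a trained grey-scale pattern classifier instead.
thresh = 0.1
sx = np.where(np.abs(dx) < thresh, dx, 0.0)
sy = np.where(np.abs(dy) < thresh, dy, 0.0)

# Reintegrate the shading-labelled gradient field: min ||A s - g|| over pixel values s.
def idx(r, c): return r * W + c
A = lil_matrix((dx.size + dy.size, H * W))
b = np.concatenate([sx.ravel(), sy.ravel()])
k = 0
for r in range(H):
    for c in range(W - 1):
        A[k, idx(r, c + 1)], A[k, idx(r, c)] = 1.0, -1.0
        k += 1
for r in range(H - 1):
    for c in range(W):
        A[k, idx(r + 1, c)], A[k, idx(r, c)] = 1.0, -1.0
        k += 1

shading_est = lsqr(A.tocsr(), b)[0].reshape(H, W)
shading_est -= shading_est.mean() - shading.mean()    # fix the unknown additive constant
reflectance_est = log_img - shading_est               # reflectance = whatever shading leaves over
print("shading RMSE:", np.sqrt(np.mean((shading_est - shading) ** 2)))
```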


Fractional Belief Propagation

Neural Information Processing Systems

We consider loopy belief propagation for approximate inference in probabilistic graphical models. A limitation of the standard algorithm is that clique marginals are computed as if there were no loops in the graph. To overcome this limitation, we introduce fractional belief propagation. Fractional belief propagation is formulated in terms of a family of approximate free energies, which includes the Bethe free energy and the naive mean-field free energy as special cases. Using the linear response correction of the clique marginals, the scale parameters can be tuned. Simulation results illustrate the potential merits of the approach.
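For reference, the Bethe free energy that this family contains as a special case has the standard form below (taken from the general loopy-BP literature rather than from the paper); the fractional family can be viewed as attaching a tunable scale parameter c_alpha to each clique term, with c_alpha = 1 recovering the Bethe case. The precise parameterization used in the paper is not reproduced here.

```latex
% Standard Bethe free energy over clique beliefs b_alpha and node beliefs b_i;
% d_i denotes the number of cliques that contain variable i.
F_{\mathrm{Bethe}}\bigl(\{b_\alpha\},\{b_i\}\bigr)
  = \sum_\alpha \sum_{x_\alpha} b_\alpha(x_\alpha)\,
      \log \frac{b_\alpha(x_\alpha)}{\psi_\alpha(x_\alpha)}
  \;-\; \sum_i (d_i - 1) \sum_{x_i} b_i(x_i)\, \log b_i(x_i)
```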


Coulomb Classifiers: Generalizing Support Vector Machines via an Analogy to Electrostatic Systems

Neural Information Processing Systems

We introduce a family of classifiers based on a physical analogy to an electrostatic system of charged conductors. The family, called Coulomb classifiers, includes the two best-known support-vector machines (SVMs), the ν-SVM and the C-SVM. In the electrostatics analogy, a training example corresponds to a charged conductor at a given location in space, the classification function corresponds to the electrostatic potential function, and the training objective function corresponds to the Coulomb energy. The electrostatic framework provides not only a novel interpretation of existing algorithms and their interrelationships, but also suggests a variety of new methods for SVMs, including kernels that bridge the gap between polynomial and radial-basis functions, objective functions that do not require positive-definite kernels, and regularization techniques that allow for the construction of an optimal classifier in Minkowski space. Based on the framework, we propose novel SVMs and perform simulation studies to show that they are comparable or superior to standard SVMs. The experiments include classification tasks on data which are represented in terms of their pairwise proximities, where a Coulomb classifier outperformed standard SVMs.
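Since the family is stated to include the ν-SVM and the C-SVM, the short sketch below simply fits those two standard members on toy data using scikit-learn (an assumed, readily available stand-in); the Coulomb-specific kernels and objective functions proposed in the paper are not implemented here.

```python
# Fit the two standard SVMs that the Coulomb family is said to include; scikit-learn is
# an assumed dependency and the data are synthetic. The paper's Coulomb-specific kernels
# and objectives are not reproduced in this sketch.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC, NuSVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

c_svm = SVC(C=1.0, kernel="rbf").fit(X_tr, y_tr)        # C-SVM
nu_svm = NuSVC(nu=0.5, kernel="rbf").fit(X_tr, y_tr)    # nu-SVM
print("C-SVM accuracy: ", c_svm.score(X_te, y_te))
print("nu-SVM accuracy:", nu_svm.score(X_te, y_te))
```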