A Model of the Neural Basis of the Rat's Sense of Direction

Neural Information Processing Systems

In the last decade, the outlines of the neural structures subserving the sense of direction have begun to emerge, and several investigations have shed light on the effects of vestibular and visual input on the head direction representation. In this paper, a model of the neural mechanisms underlying the head direction system is formulated. The model is built out of simple ingredients, depending on nothing more complicated than connectional specificity, attractor dynamics, Hebbian learning, and sigmoidal nonlinearities, yet it behaves in a sophisticated way and is consistent with most of the observed properties of real head direction cells. In addition, it makes a number of predictions that ought to be testable by reasonably straightforward experiments.
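To make those ingredients concrete, here is a minimal ring-attractor sketch in the spirit of the abstract; the network size, weight profile, and update rule are illustrative assumptions, not the paper's actual model.

```python
# Minimal ring-attractor sketch (illustrative constants, not the paper's).
# Cells are laid out on a ring; each excites cells with nearby preferred
# directions and inhibits the rest, and rates pass through a sigmoid.
# A single bump of activity forms and persists, which is how attractor
# models represent the current head direction.
import numpy as np

N = 100                                   # number of head direction cells
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Recurrent weights: local excitation, broad inhibition.
diff = theta[:, None] - theta[None, :]
W = np.exp(3.0 * np.cos(diff)) / np.exp(3.0) - 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
r = 0.1 + 0.01 * rng.random(N)            # near-uniform initial rates
for _ in range(500):                      # relax toward the attractor
    r += 0.1 * (-r + sigmoid(W @ r))

print("activity bump centred near unit", int(np.argmax(r)))
```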


Associative Decorrelation Dynamics: A Theory of Self-Organization and Optimization in Feedback Networks

Neural Information Processing Systems

This paper outlines a dynamic theory of development and adaptation in neural networks with feedback connections. Given an input ensemble, the connections change in strength according to an associative learning rule and approach a stable state where the neuronal outputs are decorrelated. We apply this theory to primary visual cortex and examine the implications of the dynamical decorrelation of the activities of orientation-selective cells by the intracortical connections. The theory gives a unified and quantitative explanation of the psychophysical experiments on orientation contrast and orientation adaptation. Using only one parameter, we achieve good agreement between the theoretical predictions and the experimental data.
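As a rough illustration of decorrelation by feedback connections, the sketch below uses a generic anti-Hebbian rule on lateral weights; this is an assumption-laden stand-in for the paper's associative rule, not a reproduction of it.

```python
# Decorrelation by anti-Hebbian lateral feedback (a generic rule in the
# spirit of the abstract). Lateral weights grow where outputs co-vary
# and feed back negatively, so at the stable state the outputs are
# (approximately) decorrelated.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))
X[:, 1] += 0.8 * X[:, 0]                  # correlated input ensemble

M = np.zeros((4, 4))                      # lateral (feedback) weights
eta = 0.01
for x in X:
    y = x - M @ x                         # output after lateral feedback
    dM = eta * np.outer(y, y)             # anti-Hebbian update ...
    np.fill_diagonal(dM, 0.0)             # ... on off-diagonal weights only
    M += dM

Y = X - X @ M.T
print(np.round(np.corrcoef(Y.T), 2))      # near-diagonal correlation matrix
```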


Optimal Movement Primitives

Neural Information Processing Systems

The theory of Optimal Unsupervised Motor Learning shows how a network can discover a reduced-order controller for an unknown nonlinear system by representing only the most significant modes. Here, I extend the theory to apply to command sequences, so that the most significant components discovered by the network correspond to motion "primitives". Combinations of these primitives can be used to produce a wide variety of different movements. I demonstrate applications to human handwriting decomposition and synthesis, as well as to the analysis of electrophysiological experiments on movements resulting from stimulation of the frog spinal cord.

1 INTRODUCTION

There is much debate within the neuroscience community concerning the internal representation of movement, and current neurophysiological investigations are aimed at uncovering these representations. In this paper, I propose a different approach that attempts to define the optimal internal representation in terms of "movement primitives", and I compare this representation with the observed behavior. In this way, we can make strong predictions about internal signal processing.
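One concrete way to read the decomposition-and-resynthesis idea is as principal component analysis over a set of command trajectories. The sketch below applies this to synthetic pen-velocity data; the two generating "primitives" and all sizes are invented for illustration, and the paper's network formulation is more general.

```python
# Movement primitives as the most significant components of a set of
# command sequences, rendered here as plain PCA (via SVD) over synthetic
# velocity trajectories, followed by resynthesis from the primitives.
import numpy as np

rng = np.random.default_rng(2)
T = 50                                            # samples per movement
t = np.linspace(0, 1, T)
# Two underlying "primitives" plus noise generate 200 movements.
p1, p2 = np.sin(2 * np.pi * t), np.sin(4 * np.pi * t)
A = rng.normal(size=(200, 2))
V = A @ np.vstack([p1, p2]) + 0.05 * rng.normal(size=(200, T))

# Principal components of the trajectories = candidate primitives.
Vc = V - V.mean(axis=0)
U, S, Wt = np.linalg.svd(Vc, full_matrices=False)
primitives = Wt[:2]                               # top-2 components

# Resynthesize each movement from its two primitive coefficients.
coeffs = Vc @ primitives.T
V_hat = coeffs @ primitives + V.mean(axis=0)
print("reconstruction error:", np.linalg.norm(V - V_hat) / np.linalg.norm(V))
```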


Hyperparameters, Evidence and Generalisation for an Unrealisable Rule

Neural Information Processing Systems

Using a statistical mechanical formalism, we calculate the evidence, generalisation error and consistency measure for a linear perceptron trained and tested on a set of examples generated by a nonlinear teacher. The teacher is said to be unrealisable because the student can never model it without error. Our model allows us to interpolate between the known case of a linear teacher and an unrealisable, nonlinear teacher. A comparison of the hyperparameters which maximise the evidence with those that optimise the performance measures reveals that, in the nonlinear case, the evidence procedure is a misleading guide to optimising performance. Finally, we explore the extent to which the evidence procedure is unreliable and find that, despite being sub-optimal, in some circumstances it might be a useful method for fixing the hyperparameters.

1 INTRODUCTION

The analysis of supervised learning, or learning from examples, is a major field of research within neural networks.
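A numerical analogue of this comparison can be set up with ordinary Bayesian linear regression: maximise the log evidence over the weight-decay hyperparameter, and compare with the hyperparameter that minimises test error under a nonlinear teacher. The sketch below does this; the tanh teacher, assumed noise precision beta, and problem sizes are illustrative choices, not the paper's statistical-mechanics calculation.

```python
# Evidence vs. generalisation for a linear student and a nonlinear
# (unrealisable) teacher, using standard Bayesian linear regression.
import numpy as np

rng = np.random.default_rng(3)
N, P = 20, 40
w_t = rng.normal(size=N)
X = rng.normal(size=(P, N)) / np.sqrt(N)
y = np.tanh(X @ w_t)                      # nonlinear, unrealisable teacher

beta = 10.0                               # assumed output noise precision

def posterior_mean(alpha):
    A = alpha * np.eye(N) + beta * X.T @ X
    return np.linalg.solve(A, beta * X.T @ y), A

def log_evidence(alpha):
    # log p(y | X, alpha, beta) for the prior w ~ N(0, I/alpha).
    m, A = posterior_mean(alpha)
    _, logdetA = np.linalg.slogdet(A)
    return (0.5 * N * np.log(alpha) + 0.5 * P * np.log(beta)
            - 0.5 * beta * np.sum((y - X @ m) ** 2) - 0.5 * alpha * m @ m
            - 0.5 * logdetA - 0.5 * P * np.log(2 * np.pi))

def test_error(alpha, trials=2000):
    m, _ = posterior_mean(alpha)
    Xs = rng.normal(size=(trials, N)) / np.sqrt(N)
    return np.mean((np.tanh(Xs @ w_t) - Xs @ m) ** 2)

alphas = np.logspace(-2, 2, 41)
a_ev = alphas[np.argmax([log_evidence(a) for a in alphas])]
a_ge = alphas[np.argmin([test_error(a) for a in alphas])]
print("evidence picks alpha =", a_ev, "; generalisation prefers", a_ge)
```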


Deterministic Annealing Variant of the EM Algorithm

Neural Information Processing Systems

We present a deterministic annealing variant of the EM algorithm for maximum likelihood parameter estimation problems. In our approach, the EM process is reformulated as the problem of minimizing the thermodynamic free energy, using the principle of maximum entropy and a statistical-mechanics analogy. Unlike simulated annealing approaches, this minimization is performed deterministically. Moreover, unlike the conventional EM algorithm, the derived algorithm can obtain better estimates that are largely independent of the initial parameter values.
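A minimal sketch of the annealing idea, assuming a one-dimensional two-component Gaussian mixture with fixed unit variances: the only change from standard EM is that responsibilities are computed from likelihoods raised to a power beta, which is annealed up to 1 so that early iterations see a flattened free-energy landscape.

```python
# Deterministic-annealing EM for a toy 1-D two-Gaussian mixture
# (illustrative; the paper derives the general free-energy formulation).
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 300)])

mu = np.array([-0.5, 0.5])                # deliberately poor initial means
for beta in [0.2, 0.4, 0.6, 0.8, 1.0]:    # annealing schedule up to beta = 1
    for _ in range(20):                   # EM steps at this temperature
        ll = -0.5 * (x[:, None] - mu[None, :]) ** 2        # log-likelihoods
        r = np.exp(beta * (ll - ll.max(axis=1, keepdims=True)))
        r /= r.sum(axis=1, keepdims=True)                  # tempered E-step
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)  # usual M-step

print("estimated means:", np.round(mu, 2))                 # near [-2, 2]
```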


Generalization in Reinforcement Learning: Safely Approximating the Value Function

Neural Information Processing Systems

Reinforcement learning, the problem of getting an agent to learn to act from sparse, delayed rewards, has been advanced by techniques based on dynamic programming (DP). These algorithms compute a value function which gives, for each state, the minimum possible long-term cost commencing in that state. For the high-dimensional and continuous state spaces characteristic of real-world control tasks, a discrete representation of the value function is intractable; some form of generalization is required. A natural way to incorporate generalization into DP is to use a function approximator, rather than a lookup table, to represent the value function. This approach, which dates back to uses of Legendre polynomials in DP [Bellman et al., 1963], has recently worked well on several dynamic control problems [Mahadevan and Connell, 1990, Lin, 1993] and succeeded spectacularly on the game of backgammon [Tesauro, 1992, Boyan, 1992]. On the other hand, many sensible implementations have been less successful [Bradtke, 1993, Schraudolph et al., 1994].
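The generic scheme the excerpt describes, value iteration with a regression model in place of the lookup table, can be sketched on a toy one-dimensional task as below. The task, features, and sweep count are invented for illustration, and (as the excerpt itself warns) such combinations are not guaranteed to behave well in general.

```python
# Fitted value iteration on a toy 1-D shortest-path task: each sweep
# regresses a polynomial approximator onto one-step Bellman backups,
# replacing the table write with a least-squares fit.
import numpy as np

# State in [0, 1]; actions move +/-0.1; cost 1 per step until s >= 1.
STEP, GOAL = 0.1, 1.0
states = np.linspace(0, 1, 21)

def features(s):                          # simple polynomial features
    return np.array([np.ones_like(s), s, s ** 2]).T

w = np.zeros(3)                           # approximator weights
for sweep in range(100):
    V = lambda s: features(s) @ w
    targets = []
    for s in states:
        if s >= GOAL:
            targets.append(0.0)           # absorbing goal: zero cost-to-go
            continue
        # Bellman backup: min over actions of step cost + successor value.
        succ = np.clip([s - STEP, s + STEP], 0, 1)
        targets.append(1.0 + min(V(succ)))
    # "Generalization": a least-squares fit replaces the table update.
    w, *_ = np.linalg.lstsq(features(states), np.array(targets), rcond=None)

print("V(0) ~", features(np.array([0.0])) @ w)   # roughly 10 steps to goal
```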


Anatomical origin and computational role of diversity in the response properties of cortical neurons

Neural Information Processing Systems

Our results show that maximal diversity of neuronal response properties is attained when the ratio of dendritic and axonal arbor sizes is equal to 1, a value found in many cortical areas and across species (Lund et al., 1993; Malach, 1994). Maximization of diversity also leads to better performance in systems of receptive fields implementing steerable/shiftable filters, which may be necessary for generating the seemingly continuous range of orientation selectivity found in V1, and in matching spatially distributed signals. This cortical organization principle may, therefore, have the double advantage of accounting for the formation of the cortical columns and the associated patchy projection patterns, and of explaining how systems of receptive fields can support functions such as the generation of precise response tuning from imprecise distributed inputs, and the matching of distributed signals, a problem that arises in visual tasks such as stereopsis, motion processing, and recognition.
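For readers unfamiliar with the term, a classic concrete instance of a steerable filter (in the sense of Freeman and Adelson) is the first derivative of a Gaussian, which can be synthesized at any orientation as an exact linear combination of just two basis filters. The sketch below verifies that identity numerically and is offered only to unpack the terminology used above.

```python
# Steering a derivative-of-Gaussian filter from two basis filters.
import numpy as np

x, y = np.meshgrid(np.linspace(-3, 3, 31), np.linspace(-3, 3, 31))
G = np.exp(-(x ** 2 + y ** 2) / 2)
Gx, Gy = -x * G, -y * G                   # basis filters at 0 and 90 degrees

theta = np.deg2rad(30)
steered = np.cos(theta) * Gx + np.sin(theta) * Gy        # filter at 30 deg
direct = -(x * np.cos(theta) + y * np.sin(theta)) * G    # built directly
print("max steering error:", np.abs(steered - direct).max())   # ~0
```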


On the Computational Complexity of Networks of Spiking Neurons

Neural Information Processing Systems

We investigate the computational power of a formal model for networks of spiking neurons, both under the assumption of unlimited timing precision and in the case of limited timing precision. We also prove upper and lower bounds on the number of examples that are needed to train such networks.


Dynamic Cell Structures

Neural Information Processing Systems

Dynamic Cell Structures (DCS) represent a family of artificial neural architectures suited to both unsupervised and supervised learning. They belong to the recently introduced class of Topology Representing Networks (TRN) [Martinetz94], which build perfectly topology-preserving feature maps. DCS employ a modified Kohonen learning rule in conjunction with competitive Hebbian learning. The Kohonen-type learning rule serves to adjust the synaptic weight vectors, while Hebbian learning establishes a dynamic lateral connection structure between the units reflecting the topology of the feature manifold. In the case of supervised learning, i.e. function approximation, each neural unit implements a radial basis function, and an additional layer of linear output units adjusts according to a delta rule. DCS is the first RBF-based approximation scheme attempting to concurrently learn and utilize a perfectly topology-preserving map for improved performance. Simulations on a selection of CMU benchmarks indicate that the DCS idea applied to the Growing Cell Structure algorithm [Fritzke93] leads to an efficient and elegant algorithm that can beat conventional models on similar tasks.
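The two learning rules the abstract combines can be sketched generically as follows; this is a simplified stand-in (no edge aging, resource counters, or unit growth) rather than the DCS algorithm itself, with all constants chosen for illustration.

```python
# Kohonen-style weight adaptation plus competitive Hebbian learning:
# each input pulls the best-matching unit (and its lateral neighbours)
# toward it, while the edge between the two closest units is
# strengthened, building a topology-representing lateral graph.
import numpy as np

rng = np.random.default_rng(5)
units = rng.random((10, 2))               # weight vectors in input space
C = np.zeros((10, 10))                    # lateral connection strengths

eps_b, eps_n = 0.1, 0.01
for _ in range(2000):
    x = rng.random(2)                     # input drawn from the unit square
    d = np.linalg.norm(units - x, axis=1)
    b, s = np.argsort(d)[:2]              # best and second-best unit
    C[b, s] = C[s, b] = 1.0               # competitive Hebbian edge
    units[b] += eps_b * (x - units[b])    # Kohonen-type winner update
    nbrs = C[b] > 0                       # lateral neighbours of the winner
    units[nbrs] += eps_n * (x - units[nbrs])

print("lateral edges learned:", int(C.sum() / 2))
```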


Spatial Representations in the Parietal Cortex May Use Basis Functions

Neural Information Processing Systems

The parietal cortex is thought to represent the egocentric positions of objects in particular coordinate systems. We propose an alternative approach to spatial perception of objects in the parietal cortex from the perspective of sensorimotor transformations. The responses of single parietal neurons can be modeled as a Gaussian function of retinal position multiplied by a sigmoid function of eye position, which form a set of basis functions. We show here how these basis functions can be used to generate receptive fields in either retinotopic or head-centered coordinates by simple linear transformations. This raises the possibility that the parietal cortex does not attempt to compute the positions of objects in a particular frame of reference but instead computes a general-purpose representation of the retinal location and eye position from which any transformation can be synthesized by direct projection. This representation predicts that hemineglect, a neurological syndrome produced by parietal lesions, should not be confined to egocentric coordinates, but should be observed in multiple frames of reference in single patients, a prediction supported by several experiments.
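A small numerical sketch of the basis-function claim, with illustrative population sizes and a least-squares readout standing in for the "direct projection": each unit responds as a Gaussian of retinal position times a sigmoid of eye position, and a single linear combination of the population approximates a head-centred receptive field.

```python
# Gaussian(retinal) x sigmoid(eye) basis functions, read out linearly
# to build a receptive field in head-centred coordinates (r + e).
import numpy as np

rng = np.random.default_rng(6)
r = rng.uniform(-20, 20, 3000)            # retinal positions (deg)
e = rng.uniform(-20, 20, 3000)            # eye positions (deg)

centers = np.linspace(-20, 20, 15)        # Gaussian centres over the retina
thresholds = np.linspace(-20, 20, 11)     # sigmoid thresholds over eye position
gauss = np.exp(-(r[:, None] - centers) ** 2 / 50.0)             # (3000, 15)
sig = 1.0 / (1.0 + np.exp(-(e[:, None] - thresholds) / 4.0))    # (3000, 11)
B = (gauss[:, :, None] * sig[:, None, :]).reshape(len(r), -1)   # population

target = np.exp(-((r + e) ** 2) / 50.0)   # head-centred receptive field
w, *_ = np.linalg.lstsq(B, target, rcond=None)
print("relative fit error:",
      np.linalg.norm(B @ w - target) / np.linalg.norm(target))
```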