
Using hippocampal 'place cells' for navigation, exploiting phase coding

Neural Information Processing Systems

The firing of CA1 place cells is simulated as the (artificial) rat moves in an environment. This is the input for a neuronal network whose output, at each theta (θ) cycle, is the next direction of travel for the rat. Cells are characterised by the number of spikes fired and the time of firing with respect to the hippocampal θ rhythm. 'Learning' occurs in 'on-off' synapses that are switched on by simultaneous pre- and post-synaptic activity. The results are compared with single-unit recordings and behavioural data.
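
The on-off synapse rule described above can be sketched as a binary Hebbian update: a synapse is permanently switched on the first time its pre- and post-synaptic cells fire in the same theta cycle. The population sizes and activity patterns below are illustrative, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 100, 8                    # illustrative: place cells -> direction cells
W = np.zeros((n_post, n_pre), dtype=bool)  # 'on-off' synapses, all initially off

def learn_step(pre_active, post_active, W):
    """Switch a synapse on when its pre- and post-synaptic cells fire together."""
    W |= np.outer(post_active, pre_active)
    return W

# One theta cycle: a subset of place cells fires, and so does the cell
# coding the direction actually taken this cycle.
pre = rng.random(n_pre) < 0.1
post = np.zeros(n_post, dtype=bool)
post[3] = True
W = learn_step(pre, post, W)
```

Because the synapses are binary and can only switch on, learning here is a one-shot association rather than a graded weight change.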


Learning Spatio-Temporal Planning from a Dynamic Programming Teacher: Feed-Forward Neurocontrol for Moving Obstacle Avoidance

Neural Information Processing Systems

The action network is embedded in a sensory-motor system architecture that contains a separate world model. It is continuously fed with short-term predicted spatio-temporal obstacle trajectories and receives robot state feedback. The action net allows for external switching between alternative planning tasks. It generates goal-directed motor actions, subject to the robot's kinematic and dynamic constraints, such that collisions with moving obstacles are avoided. Using supervised learning, we distribute examples of the optimal planner mapping over a structure-level adapted, parsimonious higher-order network. The training database is generated by a Dynamic Programming algorithm. Extensive simulations reveal that the local planner mapping is highly nonlinear but can be effectively and sparsely represented by the chosen powerful net model. Excellent generalization occurs for unseen obstacle configurations. We also discuss the limitations of feed-forward neurocontrol for growing planning horizons.
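
The "Dynamic Programming teacher" idea can be illustrated on a toy grid world: value iteration yields optimal actions, which become (state, action) training pairs for a feed-forward action net. The grid, costs, and static obstacle below are illustrative stand-ins; the paper's obstacles are moving and its state space is continuous:

```python
import numpy as np

H, W_ = 8, 8
goal = (7, 7)
obstacle = {(3, y) for y in range(6)}   # static stand-in for an obstacle
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

# Value iteration: V[x, y] = length of the shortest collision-free path to goal.
V = np.full((H, W_), np.inf)
V[goal] = 0.0
for _ in range(H * W_):
    for x in range(H):
        for y in range(W_):
            if (x, y) == goal or (x, y) in obstacle:
                continue
            V[x, y] = min(
                (1.0 + V[nx, ny]
                 for dx, dy in moves
                 if 0 <= (nx := x + dx) < H and 0 <= (ny := y + dy) < W_),
                default=np.inf)

def teacher_action(x, y):
    """Optimal planner mapping: greedy step down the value function."""
    cands = [(V[x + dx, y + dy], (dx, dy)) for dx, dy in moves
             if 0 <= x + dx < H and 0 <= y + dy < W_]
    return min(cands)[1]

# Training database: examples of the optimal planner mapping, to be fitted
# by a supervised action network.
dataset = [((x, y), teacher_action(x, y))
           for x in range(H) for y in range(W_)
           if (x, y) != goal and (x, y) not in obstacle and np.isfinite(V[x, y])]
```

The point of the construction is that the expensive DP computation happens offline; the network only has to represent the resulting state-to-action mapping.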


Transient Signal Detection with Neural Networks: The Search for the Desired Signal

Neural Information Processing Systems

Matched filtering has been one of the most powerful techniques employed for transient detection. Here we show that a dynamic neural network outperforms the conventional approach. When the artificial neural network (ANN) is trained with supervised learning schemes, the desired signal must be supplied for all time, although we are only interested in detecting the transient. In this paper we also show the effects of different strategies for constructing the desired signal on detection performance. The extension of the Bayes decision rule (0/1 desired signal), optimal in static classification, performs worse than desired signals constructed from random noise or prediction during the background. Detection of poorly defined waveshapes in a nonstationary, high-noise background is an important and difficult problem in signal processing.
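
For reference, the conventional matched-filter baseline mentioned above can be sketched as correlation of the observation with a known template followed by peak-picking. The template shape, noise level, and transient location below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

template = np.hanning(32)            # illustrative known transient shape
x = rng.normal(0, 0.5, 512)          # nonstationary-free toy background noise
x[200:232] += template               # transient buried at sample 200

# Matched filtering: correlate the observation with the template and
# pick the peak of the correlator output.
mf_out = np.correlate(x, template, mode="valid")
detected_at = int(np.argmax(mf_out))   # peak should fall at or near sample 200
```

The matched filter is optimal when the transient shape is known exactly and the noise is stationary; the paper's point is that those assumptions fail for poorly defined waveshapes.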


A Method for Learning From Hints

Neural Information Processing Systems

We address the problem of learning an unknown function by putting together several pieces of information (hints) that we know about the function. We introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated for new types of hints. All the hints are represented to the learning process by examples, and examples of the function are treated on an equal footing with the rest of the hints. During learning, examples from different hints are selected for processing according to a given schedule. We present two types of schedules: fixed schedules that specify the relative emphasis of each hint, and adaptive schedules that are based on how well each hint has been learned so far. Our learning method is compatible with any descent technique that we may choose to use.
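
The two schedule types can be sketched as follows. The hint names, the error bookkeeping, and the fixed decay standing in for a descent step are hypothetical placeholders, not the paper's actual hints or update rule:

```python
import random

random.seed(0)

# Current error of each hint; the function's own examples count as one hint.
errors = {"function_examples": 1.0, "symmetry_hint": 1.0, "monotonicity_hint": 1.0}

def adaptive_schedule(errors):
    """Adaptive schedule: train next on the hint that is currently worst learned."""
    return max(errors, key=errors.get)

def fixed_schedule(weights):
    """Fixed schedule: sample hints with constant relative emphasis."""
    names, w = zip(*weights.items())
    return random.choices(names, weights=w, k=1)[0]

# Simulated training loop: processing a hint's examples reduces its error
# (a stand-in for one descent step on that hint).
for step in range(20):
    h = adaptive_schedule(errors)
    errors[h] *= 0.8
```

With equal starting errors the adaptive schedule degenerates to round-robin; it only departs from the fixed schedule once the hints are learned at different rates.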


Learning Curves, Model Selection and Complexity of Neural Networks

Neural Information Processing Systems

Learning curves show how a neural network improves as the number of training examples increases, and how this improvement is related to the network complexity. The present paper clarifies the asymptotic properties of, and the relation between, two learning curves: one concerning the predictive (generalization) loss and the other the training loss. The result gives a natural definition of the complexity of a neural network. Moreover, it provides a new criterion for model selection.
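
The two learning curves can be illustrated empirically with a least-squares toy model; the dimensions and noise level are illustrative, and the gap scaling noted in the comment is the classical linear-regression result, not a figure quoted from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

d = 10                            # number of parameters
sigma = 0.1                       # noise level
w_true = rng.normal(size=d)

def losses(N, n_test=2000):
    """Training loss and predictive loss of a least-squares fit on N examples."""
    X = rng.normal(size=(N, d))
    y = X @ w_true + sigma * rng.normal(size=N)
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    train = np.mean((X @ w_hat - y) ** 2)
    Xt = rng.normal(size=(n_test, d))
    yt = Xt @ w_true + sigma * rng.normal(size=n_test)
    test = np.mean((Xt @ w_hat - yt) ** 2)
    return train, test

for N in (20, 80, 320):
    tr, te = losses(N)
    print(N, round(tr, 4), round(te, 4))
# Both curves approach sigma^2: the training loss from below, the predictive
# loss from above, with a gap on the order of 2 * sigma^2 * d / N.
```

The fact that the gap between the two curves scales with the parameter count d is exactly what makes the pair of curves usable as a model-selection criterion.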


Integration of Visual and Somatosensory Information for Preshaping Hand in Grasping Movements

Neural Information Processing Systems

The primate brain must solve two important problems in grasping movements. The first problem concerns the recognition of grasped objects: specifically, how does the brain integrate visual and motor information about a grasped object? The second problem concerns hand-shape planning: specifically, how does the brain design the hand configuration suited to the shape of the object and the manipulation task? A neural network model that solves these problems has been developed.


A Recurrent Neural Network for Generation of Ocular Saccades

Neural Information Processing Systems

Electrophysiological studies (Cynader and Berman 1972, Robinson 1972) showed that the intermediate layer of the superior colliculus (SC) is topographically organized into a motor map. The location of active neurons in this area was found to be related to the oculomotor error (i.e.


A Formal Model of the Insect Olfactory Macroglomerulus: Simulations and Analytic Results

Neural Information Processing Systems

It is known from biological data that the response patterns of interneurons in the olfactory macroglomerulus (MGC) of insects are of central importance for the coding of the olfactory signal. We propose an analytically tractable model of the MGC which allows us to relate the distribution of response patterns to the architecture of the network.