Theory of Self-Organization of Cortical Maps

Neural Information Processing Systems

We have shown mathematically that cortical maps in the primary sensory cortices can be reproduced using three hypotheses that have a physiological basis and meaning. Here, our main focus is on ocular dominance.
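
As a hedged illustration of how such hypotheses can drive map formation, the following 1-D sketch combines a Hebbian update, short-range lateral excitation, and conservation of total synaptic strength; the specific rule and all parameters are illustrative assumptions, not the model analyzed in the paper.

import numpy as np

# 1-D sketch of ocular-dominance segregation: Hebbian learning, short-range
# lateral excitation, and conservation of total synaptic strength per unit.
# All parameters are illustrative assumptions.
rng = np.random.default_rng(0)
n = 60                                   # cortical units along one axis
w = rng.uniform(0.45, 0.55, (n, 2))      # weights from (left, right) eye
kernel = np.array([0.25, 0.5, 0.25])     # local lateral excitation

for _ in range(20000):
    eye = rng.integers(2)                        # eyes fire largely independently
    x = np.array([1.0, 0.1]) if eye == 0 else np.array([0.1, 1.0])
    y = np.convolve(w @ x, kernel, mode="same")  # response plus lateral spread
    w += 0.005 * np.outer(y, x)                  # Hebbian update
    w /= w.sum(axis=1, keepdims=True)            # competition via conservation

# Neighboring units tend to share a dominant eye, forming alternating bands.
print("".join("L" if d > 0 else "R" for d in w[:, 0] - w[:, 1]))

Conservation forces the two eyes to compete at each unit, while lateral excitation couples neighbors, so like-dominance units cluster.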


Associative Learning via Inhibitory Search

Neural Information Processing Systems

ALVIS is a reinforcement-based connectionist architecture that learns associative maps in continuous multidimensional environments. The discovered locations of positive and negative reinforcements are recorded in "do be" and "don't be" subnetworks, respectively. The outputs of the subnetworks relevant to the current goal are combined and compared with the current location to produce an error vector. This vector is backpropagated through a motor-perceptual mapping network.
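
A minimal sketch of the error-vector construction described above, assuming a simple averaging scheme for combining subnetwork outputs (the equal weighting and the function name are hypothetical, not the ALVIS implementation):

import numpy as np

# Hypothetical sketch: "do be" locations attract, "don't be" locations repel,
# and the combination is compared with the current location.
def error_vector(position, do_be, dont_be):
    attract = np.mean(do_be - position, axis=0)   # pull toward recorded rewards
    repel = np.mean(position - dont_be, axis=0)   # push away from punishments
    return attract + repel                        # error to backpropagate
                                                  # through the mapping network

position = np.array([0.2, 0.5])
do_be = np.array([[0.8, 0.9], [0.7, 0.6]])        # positive-reinforcement sites
dont_be = np.array([[0.1, 0.4]])                  # negative-reinforcement sites
print(error_vector(position, do_be, dont_be))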



Simulation and Measurement of the Electric Fields Generated by Weakly Electric Fish

Neural Information Processing Systems

The weakly electric fish, Gnathonemus petersii, explores its environment by generating pulsed electric fields and detecting small perturbations in the fields resulting from nearby objects. Accordingly, the fish detects and discriminates objects on the basis of a sequence of electric "images" whose temporal and spatial properties depend on the timing of the fish's electric organ discharge and its body position relative to objects in its environment. We are interested in investigating how these fish utilize timing and body position during exploration to aid in object discrimination. We have developed a finite-element simulation of the fish's self-generated electric fields so as to reconstruct the electrosensory consequences of body position and electric organ discharge timing in the fish. This paper describes this finite-element simulation system and presents preliminary electric field measurements which are being used to tune the simulation.
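
The following toy sketch uses finite-difference relaxation on a grid as a stand-in for the paper's finite-element solver: the potential generated by the electric organ, modeled as two poles, is relaxed to steady state, and the "electric image" is the potential sampled along the body. Geometry, boundary conditions, and resolution are all illustrative assumptions.

import numpy as np

# Toy 2-D stand-in for the finite-element forward model: Jacobi relaxation
# of the potential generated by a two-pole electric organ discharge.
n = 64
phi = np.zeros((n, n))
source = np.zeros((n, n))
source[32, 24] = 1.0      # positive pole of the electric organ
source[32, 40] = -1.0     # negative pole

for _ in range(2000):     # relax toward the steady-state potential
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:] +
                              source[1:-1, 1:-1])

# The "electric image" is the potential sampled along the fish's body; a
# nearby object would be modeled as a local change in conductivity.
print(phi[32, 20:44].round(4))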


Using Backpropagation with Temporal Windows to Learn the Dynamics of the CMU Direct-Drive Arm II

Neural Information Processing Systems

K. Y. Goldberg and B. A. Pearlmutter, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

Computing the inverse dynamics of a robot arm is an active area of research in the control literature. We hope to learn the inverse dynamics by training a neural network on the measured response of a physical arm. The input to the network is a temporal window of measured positions; the output is a vector of torques. We train the network on data measured from the first two joints of the CMU Direct-Drive Arm II as it moves through a randomly generated sample of "pick-and-place" trajectories. We then test generalization with a new trajectory and compare the network's output with the torque measured at the physical arm.
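
A sketch of the temporal-window encoding described above, assuming a window of five samples over the two measured joints (the window length and the helper's name are hypothetical):

import numpy as np

# Each training input is a window of recent joint positions; the target is
# the torque vector at the window's end.
def make_windows(positions, torques, window=5):
    # positions, torques: arrays of shape (T, n_joints)
    X = np.stack([positions[t - window + 1:t + 1].ravel()
                  for t in range(window - 1, len(positions))])
    return X, torques[window - 1:]

T, n_joints = 200, 2                      # two joints, as in the arm data
rng = np.random.default_rng(0)
positions = 0.01 * np.cumsum(rng.normal(size=(T, n_joints)), axis=0)
torques = rng.normal(size=(T, n_joints))  # placeholder for measured torques
X, y = make_windows(positions, torques)
print(X.shape, y.shape)                   # (196, 10) and (196, 2)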


Storing Covariance by the Associative Long-Term Potentiation and Depression of Synaptic Strengths in the Hippocampus

Neural Information Processing Systems

We have tested this assumption in the hippocampus, a cortical structure of the brain that is involved in long-term memory. A brief, high-frequency activation of excitatory synapses in the hippocampus produces an increase in synaptic strength known as long-term potentiation, or LTP (Bliss and Lomo, 1973), that can last for many days. LTP is known to be Hebbian since it requires the simultaneous release of neurotransmitter from presynaptic terminals coupled with postsynaptic depolarization (Kelso et al., 1986; Malinow and Miller, 1986; Gustafsson et al., 1987). However, a mechanism for the persistent reduction of synaptic strength that could balance LTP has not yet been demonstrated. We studied the associative interactions between separate inputs onto the same dendritic trees of hippocampal pyramidal cells of field CA1, and found that a low-frequency input which, by itself, does not persistently change synaptic strength, can either increase in strength (associative LTP) or decrease in strength (associative long-term depression, or LTD), depending upon whether it is positively or negatively correlated in time with a second, high-frequency bursting input. LTP of synaptic strength is Hebbian, and LTD is anti-Hebbian since it is elicited by pairing presynaptic firing with postsynaptic hyperpolarization sufficient to block postsynaptic activity.
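
These results support a covariance-style learning rule in which a synapse strengthens when pre- and postsynaptic activity are positively correlated and weakens when they are anticorrelated. A minimal sketch, with assumed activity traces and rate constant:

import numpy as np

# Covariance rule consistent with the findings above: the weight change
# follows the correlation between pre- and postsynaptic activity, giving
# LTP for positive and LTD for negative correlation.
def covariance_update(w, pre, post, lr=0.1):
    dpre = pre - pre.mean()                  # deviation from mean activity
    dpost = post - post.mean()
    return w + lr * np.mean(dpre * dpost)    # sign tracks the correlation

rng = np.random.default_rng(0)
burst = rng.random(100)                      # high-frequency bursting input
in_phase = burst + 0.1 * rng.random(100)     # low-frequency input, correlated
out_of_phase = 1.0 - burst                   # low-frequency, anticorrelated
print(covariance_update(1.0, in_phase, burst))      # > 1.0: associative LTP
print(covariance_update(1.0, out_of_phase, burst))  # < 1.0: associative LTD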



Self Organizing Neural Networks for the Identification Problem

Neural Information Processing Systems

This work introduces a new method, the Self Organizing Neural Network (SONN) algorithm, and demonstrates its use in a system identification task. The algorithm constructs the network, chooses the neuron functions, and adjusts the weights. It is compared to the Back-Propagation algorithm on the identification of a chaotic time series. The results show that SONN constructs a simpler, more accurate model.
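
A hedged sketch of the constructive idea: a greedy scheme that repeatedly adds, from a pool of candidate neuron functions, the unit whose least-squares fit best reduces the current residual. The pool, fitting method, and stopping rule are illustrative assumptions, not the SONN algorithm itself.

import numpy as np

# Greedy constructive fit over a pool of candidate neuron functions.
def fit(f, x, target):
    phi = f(x)
    a = phi @ target / (phi @ phi)     # optimal output weight for this unit
    return a * phi

def grow(x, y, candidates, max_units=5, tol=1e-3):
    residual, model = y.copy(), []
    for _ in range(max_units):
        best = min(candidates,         # unit that most reduces the residual
                   key=lambda f: np.sum((residual - fit(f, x, residual)) ** 2))
        residual = residual - fit(best, x, residual)
        model.append(best)
        if np.mean(residual ** 2) < tol:
            break
    return model, residual

x = np.linspace(-1.0, 1.0, 200)
y = np.sin(3 * x) + 0.5 * x ** 2               # toy system to identify
candidates = [np.tanh, np.sin, np.cos, lambda z: z, lambda z: z ** 2]
model, residual = grow(x, y, candidates)
print(len(model), np.mean(residual ** 2))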


Neural Networks that Learn to Discriminate Similar Kanji Characters

Neural Information Processing Systems

Yoshihiro Mori and Kazuhiko Yokosawa, ATR Auditory and Visual Perception Research Laboratories, 2-1-61 Shiromi, Higashiku, Osaka 540, Japan

A neural network is applied to the problem of recognizing Kanji characters. The recognition accuracy was higher than that of conventional methods. An analysis of connection weights showed that trained networks can discern the hierarchical structure of Kanji characters. This strategy is what makes the networks' high recognition accuracy possible. Our results suggest that neural networks are very effective for Kanji character recognition.

1 INTRODUCTION

Neural networks are applied to recognition tasks in many fields.


Efficient Parallel Learning Algorithms for Neural Networks

Neural Information Processing Systems

Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribiere method are also significantly more efficient than the Backpropagation algorithm. These results are based on experiments performed on small Boolean learning problems and the noisy real-valued learning problem of handwritten character recognition.

1 INTRODUCTION

The problem of learning in feedforward neural networks has received a great deal of attention recently because of the ability of these networks to represent seemingly complex mappings in an efficient parallel architecture. This learning problem can be characterized as an optimization problem, but it is unique in several respects. Function evaluation is very expensive. However, because the underlying network is parallel in nature, this evaluation is easily parallelizable. In this paper, we describe the network learning problem in a numerical framework and investigate parallel algorithms for its solution. Specifically, we compare the performance of several parallelizable optimization techniques to the standard Backpropagation algorithm. Experimental results show the clear superiority of the numerical techniques.

2 NEURAL NETWORKS

A neural network is characterized by its architecture, its node functions, and its interconnection weights. In a learning problem, the first two of these are fixed, so that the weight values are the only free parameters in the system.
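
As a concrete point of comparison, a minimal sketch of the Polak-Ribiere conjugate-gradient step follows; in network training, backpropagation's role is only to supply the gradient, and the quadratic test function and backtracking line search here are illustrative assumptions.

import numpy as np

# Minimal Polak-Ribiere conjugate-gradient sketch on a 2-D quadratic that
# stands in for a network's error surface.
def polak_ribiere(f, grad, w, iters=50):
    g = grad(w)
    d = -g                               # first direction: steepest descent
    for _ in range(iters):
        step = 1.0                       # backtracking line search along d
        while f(w + step * d) > f(w) and step > 1e-12:
            step *= 0.5
        w = w + step * d
        g_new = grad(w)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))  # PR formula
        d = -g_new + beta * d            # new conjugate direction
        g = g_new
    return w

A = np.array([[3.0, 0.5], [0.5, 1.0]])   # positive-definite test problem
f = lambda w: 0.5 * w @ A @ w
grad = lambda w: A @ w
print(polak_ribiere(f, grad, np.array([2.0, -1.5])))   # approaches the optimum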