Cricket Wind Detection
A great deal of interest has recently been focused on theories concerning parallel distributed processing in central nervous systems. In particular, many researchers have become interested in the structure and function of "computational maps" in sensory systems. As defined in a recent review (Knudsen et al., 1987), a "map" is an array of nerve cells within which there is a systematic variation in the "tuning" of neighboring cells for a particular parameter. For example, the projection from retina to visual cortex is a relatively simple topographic map; each cortical hypercolumn itself contains a more complex "computational" map of preferred line orientation, representing the angle of tilt of a simple line stimulus. The overall goal of the research in my lab is to determine how a relatively complex mapped sensory system extracts and encodes information from external stimuli.
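The notion of a tuning-based map can be made concrete with a toy sketch (not from the paper): an array of units whose Gaussian orientation tuning shifts systematically from cell to cell. The 15-degree spacing and 30-degree tuning width are purely illustrative assumptions.

```python
import math

def tuning(preferred_deg, stimulus_deg, width_deg=30.0):
    """Gaussian tuning curve on a circular orientation variable (0-180 deg)."""
    d = abs(preferred_deg - stimulus_deg) % 180.0
    d = min(d, 180.0 - d)  # wrap-around distance for orientation
    return math.exp(-(d / width_deg) ** 2)

# A 1-D "map": neighboring cells prefer systematically shifted orientations.
preferred = [i * 180.0 / 12 for i in range(12)]
responses = [tuning(p, stimulus_deg=45.0) for p in preferred]

# The location of the peak of activity across the array encodes the stimulus.
best = preferred[responses.index(max(responses))]
```

The point of the sketch is that the stimulus parameter is read out from *where* in the array activity peaks, not from the firing rate of any single cell.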
Learning Sequential Structure in Simple Recurrent Networks
Servan-Schreiber, David, Cleeremans, Axel, McClelland, James L.
This tendency to preserve information about the path is not a characteristic of traditional finite-state automata. ENCODING PATH INFORMATION In a different set of experiments, we asked whether the SRN could learn to use the information about the path that is encoded in the hidden units' patterns of activation. In one of these experiments, we tested whether the network could master length constraints. When strings generated from the small finite-state grammar are restricted to a maximum of eight letters, the prediction following the presentation of the same letter may differ depending on whether it occupies position six or seven. For example, following the sequence 'TSSSXXV', 'V' is the seventh letter and only another 'V' would be a legal successor.
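A minimal forward-pass sketch (not the authors' implementation) shows why an Elman-style SRN hidden state can carry path information at all: the state after a letter depends on the whole sequence leading up to it. The alphabet, hidden size, weight scales, and random seed are all assumptions, and the weights here are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHABET = list("BTSXPVE")  # assumed symbol set for the small grammar
H = 8                       # number of hidden units (illustrative)

# Random, untrained weights: this is a forward-pass demonstration only.
W_in = rng.normal(0.0, 1.0, (H, len(ALPHABET)))
W_rec = rng.normal(0.0, 1.0, (H, H))

def one_hot(ch):
    v = np.zeros(len(ALPHABET))
    v[ALPHABET.index(ch)] = 1.0
    return v

def hidden_after(seq):
    """Elman update: h_t = tanh(W_in x_t + W_rec h_{t-1})."""
    h = np.zeros(H)
    for ch in seq:
        h = np.tanh(W_in @ one_hot(ch) + W_rec @ h)
    return h

# Two paths ending in the same letter 'V' leave different hidden states,
# so a trained readout can, in principle, condition its prediction on the
# path (e.g. on how many letters have been seen so far).
h1 = hidden_after("TSSSXXV")
h2 = hidden_after("TXXV")
```

Learning to *exploit* that state, as the paper investigates, is a separate question; the sketch only shows that the recurrent state does not collapse distinct histories onto one vector.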
Neural Architecture
Valentino Braitenberg, Max Planck Institute, Federal Republic of Germany. While we are waiting for the ultimate biophysics of cell membranes and synapses to be completed, we may speculate on the shapes of neurons and on the patterns of their connections. Much of this will be significant whatever the outcome of future physiology. Take as an example the isotropy, anisotropy and periodicity of different kinds of neural networks. The very existence of these different types in different parts of the brain (or in different brains) defeats explanation in terms of embryology; the mechanisms of development are evidently able to make one kind of network or another. The reasons for the difference must lie in the functions they perform.
Further Explorations in Visually-Guided Reaching: Making MURPHY Smarter
Visual guidance of a multi-link arm through a cluttered workspace is known to be an extremely difficult computational problem. Classical approaches in the field of robotics have typically broken the problem into pieces of manageable size, including modules for direct and inverse kinematics and dynamics [7], along with a variety of highly complex algorithms for motion planning in the configuration space of a multi-link arm (e.g.
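As a baseline for the "direct kinematics" module mentioned above, the forward map from joint angles to end-effector position for a planar two-link arm is just two lines of trigonometry; it is the *inverse* map, and planning through clutter, that make the problem hard. The link lengths below are illustrative, not MURPHY's.

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
    """End-effector position of a planar 2-link arm (angles in radians).

    theta1 is the shoulder angle from the x-axis; theta2 is the elbow
    angle relative to the first link.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Arm fully extended straight up: shoulder at 90 degrees, elbow straight.
x, y = forward_kinematics(math.pi / 2, 0.0)
```

Inverting this map (finding angles for a desired position) is already multi-valued for two links, which is one reason classical approaches split the problem into the separate modules cited in the text.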
Neural Networks that Learn to Discriminate Similar Kanji Characters
Mori, Yoshihiro, Yokosawa, Kazuhiko
Yoshihiro Mori, Kazuhiko Yokosawa, ATR Auditory and Visual Perception Research Laboratories, 2-1-61 Shiromi, Higashi-ku, Osaka 540, Japan. ABSTRACT A neural network is applied to the problem of recognizing Kanji characters. The recognition accuracy was higher than that of conventional methods. An analysis of connection weights showed that trained networks can discern the hierarchical structure of Kanji characters. This strategy is what makes the high recognition accuracy possible. Our results suggest that neural networks are very effective for Kanji character recognition. 1 INTRODUCTION Neural networks are applied to recognition tasks in many fields.
Efficient Parallel Learning Algorithms for Neural Networks
Kramer, Alan H., Sangiovanni-Vincentelli, Alberto
Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribière method are also significantly more efficient than the backpropagation algorithm. These results are based on experiments performed on small boolean learning problems and the noisy real-valued learning problem of handwritten character recognition. 1 INTRODUCTION The problem of learning in feedforward neural networks has received a great deal of attention recently because of the ability of these networks to represent seemingly complex mappings in an efficient parallel architecture. This learning problem can be characterized as an optimization problem, but it is unique in several respects. Function evaluation is very expensive. However, because the underlying network is parallel in nature, this evaluation is easily parallelizable. In this paper, we describe the network learning problem in a numerical framework and investigate parallel algorithms for its solution. Specifically, we compare the performance of several parallelizable optimization techniques to the standard backpropagation algorithm. Experimental results show the clear superiority of the numerical techniques. 2 NEURAL NETWORKS A neural network is characterized by its architecture, its node functions, and its interconnection weights. In a learning problem, the first two of these are fixed, so that the weight values are the only free parameters in the system.
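A sketch of the Polak-Ribière update on a quadratic objective, where the line search along each direction can be done exactly; network training would replace the quadratic gradient with backpropagated gradients and a numerical line search. The matrix and vector below are illustrative, not the paper's experiments.

```python
import numpy as np

def polak_ribiere(A, b, w):
    """Minimize f(w) = 0.5 w'Aw - b'w by Polak-Ribiere conjugate gradients.

    On an n-dimensional quadratic the method converges in n iterations;
    the exact step length 'alpha' below exploits the quadratic form.
    """
    g = A @ w - b            # gradient of the quadratic objective
    d = -g                   # first direction: steepest descent
    for _ in range(len(b)):
        alpha = -(g @ d) / (d @ A @ d)        # exact line search along d
        w = w + alpha * d
        g_new = A @ w - b
        beta = g_new @ (g_new - g) / (g @ g)  # Polak-Ribiere coefficient
        d = -g_new + beta * d                 # new conjugate direction
        g = g_new
    return w

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])
w = polak_ribiere(A, b, np.zeros(2))
print(np.allclose(A @ w, b))  # True: gradient is ~0 at the minimum
```

The contrast with backpropagation (plain gradient descent with a fixed step size) is that each direction is bent by the `beta` term to stay conjugate to previous ones, which is what buys the superior convergence the abstract reports.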