A Connectionist Learning Control Architecture for Navigation
A novel learning control architecture is used for navigation. A sophisticated test-bed is used to simulate a cylindrical robot with a sonar belt in a planar environment. The task is short-range homing in the presence of obstacles. The robot receives no global information and assumes no comprehensive world model; instead, it receives only inherently limited sensory information. A connectionist architecture is presented which incorporates a large amount of a priori knowledge in the form of hard-wired networks, architectural constraints, and initial weights. Instead of hard-wiring static potential fields from object models, my architecture learns sensor-based potential fields, automatically adjusting them to avoid local minima and to produce efficient homing trajectories. It does this without object models, using only sensory information. This research demonstrates the use of a large modular architecture on a difficult task.
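The potential-field idea can be made concrete with a minimal hand-coded sketch. Note the contrast with the abstract: the architecture described above *learns* such fields from sonar data, whereas the version below hand-wires attraction toward the home position plus repulsion from sonar-detected obstacle points. The function name and all gain values are illustrative assumptions, not the paper's.

```python
import math

def potential_step(pos, goal, sonar_hits, step=0.1, k_att=1.0, k_rep=0.2):
    """One step of gradient descent on a hand-coded potential field (illustrative only)."""
    # Attractive force pulling the robot toward the home position.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive force pushing away from each sonar-detected obstacle point.
    for ox, oy in sonar_hits:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy) or 1e-9
        fx += k_rep * dx / d ** 3  # magnitude k_rep / d**2, directed away from the hit
        fy += k_rep * dy / d ** 3
    # Move a fixed distance along the normalized net-force direction.
    n = math.hypot(fx, fy) or 1e-9
    return (pos[0] + step * fx / n, pos[1] + step * fy / n)
```

Iterating this step steers the simulated robot around the obstacle toward home; hand-tuned fields of exactly this kind are prone to the local minima and inefficient trajectories that the learned architecture is designed to adjust away.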
A four neuron circuit accounts for change sensitive inhibition in salamander retina
Teeters, Jeffrey L., Eeckman, Frank H., Werblin, Frank S.
In salamander retina, the response of On-Off ganglion cells to a central flash is reduced by movement in the receptive field surround. Through computer simulation of a 2-D model which takes into account their anatomical and physiological properties, we show that interactions between four neuron types (two bipolar and two amacrine) may be responsible for the generation and lateral conductance of this change sensitive inhibition. The model shows that the four neuron circuit can account for previously observed movement sensitive reductions in ganglion cell sensitivity and allows visualization and prediction of the spatiotemporal pattern of activity in change sensitive retinal cells.
Integrated Segmentation and Recognition of Hand-Printed Numerals
Keeler, James D., Rumelhart, David E., Leow, Wee Kheng
Neural network algorithms have proven useful for recognition of individual, segmented characters. However, their recognition accuracy has been limited by the accuracy of the underlying segmentation algorithm. Conventional, rule-based segmentation algorithms encounter difficulty if the characters are touching, broken, or noisy. The problem in these situations is that often one cannot properly segment a character until it is recognized, yet one cannot properly recognize a character until it is segmented. We present here a neural network algorithm that simultaneously segments and recognizes in an integrated system. This algorithm has several novel features: it uses a supervised learning algorithm (backpropagation), but is able to take position-independent information as targets and self-organize the activities of the units in a competitive fashion to infer the positional information. We demonstrate this ability with overlapping hand-printed numerals.
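One way to see how a position-independent target can still train position-specific units is smooth-max pooling across positions: the pooled score only says "character present somewhere", while its gradient concentrates on the best-matching location. The 1-D forward-pass sketch below is my illustration of that pooling idea, not the paper's exact output combination:

```python
import math

def presence_score(signal, template):
    # Slide the template over the signal; each offset gets a match score.
    width = len(template)
    scores = [sum(s * t for s, t in zip(signal[i:i + width], template))
              for i in range(len(signal) - width + 1)]
    # Pool with log-sum-exp (a smooth max): the result is position-independent,
    # but its gradient would flow mostly to the best-matching position.
    m = max(scores)
    return m + math.log(sum(math.exp(x - m) for x in scores))
```

Training against such a pooled target lets the per-position units compete to explain where the character is, which is the self-organizing behavior the abstract describes.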
Integrated Modeling and Control Based on Reinforcement Learning and Dynamic Programming
This is a summary of results with Dyna, a class of architectures for intelligent systems based on approximating dynamic programming methods. Dyna architectures integrate trial-and-error (reinforcement) learning and execution-time planning into a single process operating alternately on the world and on a learned forward model of the world. We describe and show results for two Dyna architectures, Dyna-AHC and Dyna-Q. Using a navigation task, results are shown for a simple Dyna-AHC system which simultaneously learns by trial and error, learns a world model, and plans optimal routes using the evolving world model. We show that Dyna-Q architectures (based on Watkins's Q-learning) are easy to adapt for use in changing environments.
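A minimal Dyna-Q loop on a toy corridor shows the integration described above: each real step yields one direct Q-learning update plus several planning updates replayed from the learned world model. This is my sketch; the environment and hyperparameters are arbitrary, not the paper's navigation task.

```python
import random

def dyna_q(episodes=30, planning_steps=10, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    # Toy corridor: states 0..5, actions 0=left / 1=right, reward 1 at state 5.
    rng = random.Random(seed)
    n_states, goal = 6, 5
    Q = [[0.0, 0.0] for _ in range(n_states)]
    model = {}  # learned deterministic world model: (s, a) -> (reward, s')
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy action selection with random tie-breaking.
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s2 == goal else 0.0
            # Direct reinforcement learning from the real transition.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            model[(s, a)] = (r, s2)
            # Planning: replay randomly remembered transitions from the model.
            for _ in range(planning_steps):
                (ps, pa), (pr, ps2) = rng.choice(list(model.items()))
                Q[ps][pa] += alpha * (pr + gamma * max(Q[ps2]) - Q[ps][pa])
            s = s2
    return Q
```

Because the planning updates reuse remembered transitions, the value function converges in far fewer real steps than trial-and-error learning alone would need; after training, the greedy action in every corridor state is "right", toward the reward.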
Stereopsis by a Neural Network Which Learns the Constraints
Khotanzad, Alireza, Lee, Ying-Wung
This paper presents a neural network (NN) approach to the problem of stereopsis. The correspondence problem (finding the correct matches between the pixels of the epipolar lines of the stereo pair from amongst all the possible matches) is posed as a non-iterative many-to-one mapping. A two-layer feed forward NN architecture is developed to learn and code this nonlinear and complex mapping using the back-propagation learning rule and a training set. The important aspect of this technique is that none of the typical constraints such as uniqueness and continuity are explicitly imposed. All the applicable constraints are learned and internally coded by the NN enabling it to be more flexible and more accurate than the existing methods. The approach is successfully tested on several random-dot stereograms. It is shown that the net can generalize its learned mapping to cases outside its training set. Advantages over the Marr-Poggio Algorithm are discussed and it is shown that the NN performance is superior.
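For contrast, the kind of conventional approach the network replaces can be sketched as an explicit per-pixel search along the epipolar line. This deliberately crude intensity matcher (my illustration, not from the paper) shows why hand-crafted methods then need extra constraints like uniqueness and continuity to disambiguate matches, constraints the NN instead learns implicitly:

```python
def correspondence_bruteforce(left, right, max_disp=3):
    # For each pixel on the left epipolar line, pick the right-line pixel
    # (within max_disp) with the closest intensity -- a many-to-one mapping
    # from the scanline pair to disparities, computed by exhaustive search.
    disparities = []
    for i, v in enumerate(left):
        best_d, best_err = 0, float("inf")
        for d in range(max_disp + 1):
            j = i - d
            if 0 <= j < len(right):
                err = abs(v - right[j])
                if err < best_err:  # ties resolved toward smaller disparity
                    best_d, best_err = d, err
        disparities.append(best_d)
    return disparities
```

On ambiguous inputs such as random-dot stereograms, pure intensity matching like this produces many false matches, which is exactly where the learned mapping has the advantage.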
An Attractor Neural Network Model of Recall and Recognition
Ruppin, Eytan, Yeshurun, Yehezkel
This work presents an Attractor Neural Network (ANN) model of Recall and Recognition. It is shown that an ANN model can qualitatively account for a wide range of experimental psychological data pertaining to these two main aspects of memory access. Certain psychological phenomena are accounted for, including the effects of list length, word frequency, presentation time, context shift, and aging. Thereafter, the probabilities of successful Recall and Recognition are estimated, in order to possibly enable further quantitative examination of the model.
1 Motivation
The goal of this paper is to demonstrate that a Hopfield-based [Hop82] ANN model can qualitatively account for a wide range of experimental psychological data pertaining to the two main aspects of memory access, Recall and Recognition. Recall is defined as the ability to retrieve an item from a list of items (words) originally presented during a previous learning phase, given an appropriate cue (cued Recall), or spontaneously (free Recall). Recognition is defined as the ability to successfully acknowledge that a certain item has or has not appeared in the tutorial list learned before. The main prospect of ANN modeling is that some parameter values that, in former 'classical' models of memory retrieval (see e.g.
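A toy Hopfield-style network (my sketch of the general mechanism, not the paper's parameterization) shows how a single attractor net can support both operations: Recall runs the dynamics from a cue until it settles into a stored attractor, while Recognition checks whether a probe already sits at (or very near) an attractor.

```python
def train_hopfield(patterns):
    # Hebbian outer-product storage of +/-1 patterns, zero self-connections.
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, cue, steps=5):
    # Run synchronous dynamics from the cue toward the nearest attractor.
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

def recognize(W, probe, threshold=0.9):
    # A stored item barely moves under one update step; a novel item drifts.
    settled = recall(W, probe, steps=1)
    overlap = sum(a * b for a, b in zip(probe, settled)) / len(probe)
    return overlap > threshold
```

The overlap threshold used here for Recognition is an illustrative criterion; the model in the paper derives actual success probabilities for both operations.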
A Model of Distributed Sensorimotor Control in the Cockroach Escape Turn
Beer, R.D., Kacmarcik, G. J., Ritzmann, R.E., Chiel, H.J.
In response to a puff of wind, the American cockroach turns away and runs. The circuit underlying the initial turn of this escape response consists of three populations of individually identifiable nerve cells and appears to employ distributed representations in its operation. We have reconstructed several neuronal and behavioral properties of this system using simplified neural network models and the backpropagation learning algorithm constrained by known structural characteristics of the circuitry. In order to test and refine the model, we have also compared the model's responses to various lesions with the insect's responses to similar lesions.
Reconfigurable Neural Net Chip with 32K Connections
Graf, H. P., Janow, R., Henderson, D., Lee, R.
We describe a CMOS neural net chip with a reconfigurable network architecture. It contains 32,768 binary, programmable connections arranged in 256 'building block' neurons. Several 'building blocks' can be connected to form long neurons with up to 1024 binary connections or to form neurons with analog connections. Single- or multi-layer networks can be implemented with this chip. We have integrated this chip into a board system together with a digital signal processor and fast memory.
Discovering Viewpoint-Invariant Relationships That Characterize Objects
Zemel, Richard S., Hinton, Geoffrey E.
Using an unsupervised learning procedure, a network is trained on an ensemble of images of the same two-dimensional object at different positions, orientations and sizes. Each half of the network "sees" one fragment of the object, and tries to produce as output a set of 4 parameters that have high mutual information with the 4 parameters output by the other half of the network. Given the ensemble of training patterns, the 4 parameters on which the two halves of the network can agree are the position, orientation, and size of the whole object, or some recoding of them. After training, the network can reject instances of other shapes by using the fact that the predictions made by its two halves disagree. If two competing networks are trained on an unlabelled mixture of images of two objects, they cluster the training cases on the basis of the objects' shapes, independently of position, orientation, and size.
1 INTRODUCTION
A difficult problem for neural networks is to recognize objects independently of their position, orientation, or size.
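The disagreement-based rejection can be illustrated with a toy, translation-only version of the idea: each half predicts the whole object's center from its own fragment via an offset fitted to the training shape. The two predictions coincide for that shape at any position, and diverge for a differently sized shape. Everything here (the names, the bar shapes, the single offset parameter) is an invented simplification of the 4-parameter model:

```python
def fragment_center(points):
    # Centroid of one fragment's points.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def agreement_error(obj_points, offset):
    # Split the object into left/right fragments; each half predicts the whole
    # object's center from its own fragment centroid plus a fitted offset.
    # For the shape the offset was fitted to, the two predictions coincide at
    # any position; for other shapes they disagree -- the rejection signal.
    pts = sorted(obj_points)
    half = len(pts) // 2
    lx, ly = fragment_center(pts[:half])
    rx, ry = fragment_center(pts[half:])
    pred_left = (lx + offset, ly)
    pred_right = (rx - offset, ry)
    return abs(pred_left[0] - pred_right[0]) + abs(pred_left[1] - pred_right[1])
```

In the full model the agreed-upon quantities are learned rather than hand-chosen, and cover orientation and size as well as position; this sketch only shows why agreement between the halves is a position-invariant signature of the trained shape.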