
Statistical Prediction with Kanerva's Sparse Distributed Memory

Neural Information Processing Systems

ABSTRACT A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and for which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with 'Genetic Algorithms', and a method for improving the capacity of SDM even when used as an associative memory.

OVERVIEW This work is the result of studies involving two seemingly separate topics that proved to share a common framework. The first topic, statistical prediction, is the task of associating extremely large perceptual state vectors with future events.
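The associative read/write mechanism the abstract refers to can be sketched as follows. This is an illustrative toy implementation under common textbook assumptions about SDM (random binary hard addresses, activation of all locations within a Hamming radius, per-bit counters); all parameter names and default values are chosen for the example, not taken from the paper.

```python
import numpy as np

class SDM:
    """Toy sparse distributed memory: random hard addresses,
    Hamming-radius activation, additive bit counters."""

    def __init__(self, n_locations=1000, dim=256, radius=111, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random hard addresses for the physical locations
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        # Per-location, per-bit counters, initially zero
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _active(self, address):
        # A location is active if its hard address lies within
        # Hamming distance `radius` of the query address
        dist = np.sum(self.addresses != address, axis=1)
        return dist <= self.radius

    def write(self, address, data):
        # Increment counters where the data bit is 1, decrement where 0,
        # at every active location
        act = self._active(address)
        self.counters[act] += np.where(data == 1, 1, -1)

    def read(self, address):
        # Pool counters over active locations and threshold at zero
        act = self._active(address)
        sums = self.counters[act].sum(axis=0)
        return (sums > 0).astype(int)
```

Writing a pattern and reading back at the same address recovers it exactly when the memory is well under capacity; the statistical-predictor interpretation in the abstract concerns what the pooled counter sums compute as more and more correlated patterns are superimposed.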


ALVINN: An Autonomous Land Vehicle in a Neural Network

Neural Information Processing Systems

ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.
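The forward pass of a 3-layer network of this kind can be sketched as below. The layer sizes follow common descriptions of ALVINN (a 30x32 input retina, a small hidden layer, and one output unit per candidate steering direction), but they and all names here should be treated as assumptions for illustration, not the paper's exact architecture; the weights are random, so this shows only the dataflow, not a trained road follower.

```python
import numpy as np

def steer(x, W1, b1, W2, b2):
    """One forward pass: input retina -> hidden layer -> steering units."""
    # Hidden layer with sigmoid activation
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))
    # Output layer: one unit per candidate steering direction
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))
    # The chosen direction is the most active output unit
    return int(np.argmax(y))

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 30 * 32, 29, 45  # assumed sizes for the sketch
W1 = rng.normal(0, 0.1, (n_hidden, n_in)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_out, n_hidden)); b2 = np.zeros(n_out)
direction = steer(rng.random(n_in), W1, b1, W2, b2)
```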


An Information Theoretic Approach to Rule-Based Connectionist Expert Systems

Neural Information Processing Systems

In this paper we discuss architectures for executing probabilistic rule-bases in a parallel manner, using recently introduced information-theoretic models as a theoretical basis. We begin by describing our (non-neural) learning algorithm and theory of quantitative rule modelling, followed by a discussion of the exact nature of two particular models. Finally we work through an example of our approach, going from database to rules to inference network, and compare the network's performance with the theoretical limits for specific problems.



Neural Architecture

Neural Information Processing Systems

While we are waiting for the ultimate biophysics of cell membranes and synapses to be completed, we may speculate on the shapes of neurons and on the patterns of their connections. Much of this will be significant whatever the outcome of future physiology. Take as an example the isotropy, anisotropy and periodicity of different kinds of neural networks. The very existence of these different types in different parts of the brain (or in different brains) defeats explanation in terms of embryology; the mechanisms of development are able to make one kind of network or another. The reasons for the difference must be in the functions they perform.


A Network for Image Segmentation Using Color

Neural Information Processing Systems

Otherwise it might ascribe different characteristics to the same object under different lights. But the first step in using color for recognition, segmenting the scene into regions of different colors, does not require color constancy.


A Connectionist Expert System that Actually Works

Neural Information Processing Systems

ABSTRACT The Space Environment Laboratory in Boulder has collaborated with the University of Colorado to construct a small expert system for solar flare forecasting, called THEO. It performed as well as a skilled human forecaster. We have constructed TheoNet, a three-layer back-propagation connectionist network that learns to forecast flares as well as THEO does. TheoNet's success suggests that a connectionist network can perform the task of knowledge engineering automatically. A study of the internal representations constructed by the network may give insights into the "microstructure" of reasoning processes in the human brain.


Training a Limited-Interconnect, Synthetic Neural IC

Neural Information Processing Systems

Hardware implementation of neuromorphic algorithms is hampered by high degrees of connectivity. Functionally equivalent feedforward networks may be formed by using limited fan-in nodes and additional layers.
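The idea of trading fan-in for depth can be illustrated with a single high fan-in linear node replaced by a tree of limited fan-in nodes computing partial sums. This is a minimal sketch of the general technique, not the paper's specific construction; the function name and fan-in limit are chosen for the example.

```python
def tree_sum(inputs, weights, fan_in=4):
    """Compute a weighted sum of N inputs using only nodes with
    at most `fan_in` inputs, by adding layers of partial-sum nodes."""
    # First layer: per-input weighting (fan-in 1 per term)
    terms = [w * x for w, x in zip(weights, inputs)]
    # Each subsequent layer combines at most `fan_in` partial sums,
    # until a single node holds the full sum
    while len(terms) > 1:
        terms = [sum(terms[i:i + fan_in]) for i in range(0, len(terms), fan_in)]
    return terms[0]
```

An 8-input node with fan-in limited to 4 becomes two layers of adders; the result is functionally identical to the original wide node, at the cost of extra depth.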


Constraints on Adaptive Networks for Modeling Human Generalization

Neural Information Processing Systems

ABSTRACT The potential of adaptive networks to learn categorization rules and to model human performance is studied by comparing how natural and artificial systems respond to new inputs, i.e., how they generalize. Like humans, networks can learn a deterministic categorization task by a variety of alternative individual solutions. An analysis of the constraints imposed by using networks with the minimal number of hidden units shows that this "minimal configuration" constraint is not sufficient. A further analysis of human and network generalizations indicates that initial conditions may provide important constraints on generalization. A new technique, which we call "reversed learning", is described for finding appropriate initial conditions.

INTRODUCTION We are investigating the potential of adaptive networks to learn categorization tasks and to model human performance.


Learning by Choice of Internal Representations

Neural Information Processing Systems

We introduce a learning algorithm for multilayer neural networks composed of binary linear threshold elements. Whereas existing algorithms reduce the learning process to minimizing a cost function over the weights, our method treats the internal representations as the fundamental entities to be determined. Once a correct set of internal representations is arrived at, the weights are found by the local and biologically plausible Perceptron Learning Rule (PLR). We tested our learning algorithm on four problems: adjacency, symmetry, parity and combined symmetry-parity.
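The Perceptron Learning Rule that the abstract invokes for the final weight-finding step can be sketched as follows for a single binary threshold unit: once the internal representations fix each unit's inputs and targets, every unit can be trained independently by this purely local rule. The function name and parameters are illustrative, not from the paper.

```python
def plr_train(samples, n_inputs, lr=1.0, epochs=100):
    """Perceptron Learning Rule for one threshold unit.
    samples: list of (input vector in {0,1}^n, target in {0,1})."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        converged = True
        for x, t in samples:
            # Binary threshold output of the unit
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if y != t:
                converged = False
                # Local update: move weights toward the target output
                for i in range(n_inputs):
                    w[i] += lr * (t - y) * x[i]
                b += lr * (t - y)
        if converged:
            break
    return w, b
```

For linearly separable targets (as a correct internal representation guarantees for each unit), the rule converges in a finite number of updates; for example, it learns the AND function of two inputs in a handful of epochs.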