Computer Systems that Learn: Classification and Prediction Methods from Statistics

Classics

This book is a practical guide to classification learning systems and their applications. These computer programs learn from sample data and make predictions for new cases, sometimes exceeding the performance of humans. Practical learning systems from statistical pattern recognition, neural networks, and machine learning are presented. The authors examine prominent methods from each area, using an engineering approach and taking the practitioner's viewpoint. Intuitive explanations with a minimum of mathematics make the material accessible to anyone, regardless of experience or special interests. The underlying concepts of the learning methods are discussed with fully worked-out examples, along with their strengths and weaknesses and the estimation of their future performance on specific applications. Throughout, the authors offer their own recommendations for selecting and applying learning methods such as linear discriminants, back-propagation neural networks, or decision trees. Learning systems are then contrasted with their rule-based counterparts from expert systems. Morgan Kaufmann, 1990.
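As a rough illustration of the kind of method the book covers, the following is a minimal Python sketch of a linear discriminant classifier. The toy data, and the simplification of using the direction between class means with a midpoint threshold, are our own illustrative choices, not taken from the book.

    import numpy as np

    # Toy two-class training data (invented for illustration):
    # each row is a feature vector, labels are 0 or 1.
    X = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 8.5]])
    y = np.array([0, 0, 1, 1])

    # Fit a simple linear discriminant: project onto the direction
    # joining the class means (a special case of Fisher's method
    # with identity within-class covariance) and threshold at the
    # midpoint between the means.
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    w = mu1 - mu0                      # discriminant direction
    b = -w.dot((mu0 + mu1) / 2.0)      # midpoint threshold

    def predict(x):
        """Classify a new case: 1 if it falls on the class-1 side."""
        return int(w.dot(x) + b > 0)

    print(predict(np.array([1.2, 2.1])))  # expected: 0
    print(predict(np.array([5.5, 8.2])))  # expected: 1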


Can logic programming execute as fast as imperative programming?

Classics

The output is assembly code for the Berkeley Abstract Machine (BAM). Directives take effect starting from the next predicate read from the input. Clauses do not have to be contiguous in the input stream; however, the whole stream is read before compilation starts. This manual is organized into ten sections.


Neural Network Design and the Complexity of Learning

Classics

MIT Press. See also: A reply to Honavar's book review of Neural Network Design and the Complexity of Learning (https://link.springer.com/article/10.1007%2FBF00993256?LI=true).


Neuronal Maps for Sensory-Motor Control in the Barn Owl

Neural Information Processing Systems

The barn owl has fused visual/auditory/motor representations of space in its midbrain, which are used to orient the head so that visual or auditory stimuli are centered in the visual field of view.


Using Backpropagation with Temporal Windows to Learn the Dynamics of the CMU Direct-Drive Arm II

Neural Information Processing Systems

K. Y. Goldberg and B. A. Pearlmutter, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

Computing the inverse dynamics of a robot arm is an active area of research in the control literature. We hope to learn the inverse dynamics by training a neural network on the measured response of a physical arm. The input to the network is a temporal window of measured positions; the output is a vector of torques. We train the network on data measured from the first two joints of the CMU Direct-Drive Arm II as it moves through a randomly generated sample of "pick-and-place" trajectories. We then test generalization with a new trajectory and compare the network's output with the torque measured at the physical arm.
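A minimal Python/NumPy sketch of the temporal-window representation described above. The window length, layer sizes, and stand-in trajectory are invented for illustration; the weights shown would in practice be trained by back-propagation on (window, measured torque) pairs from the physical arm.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: a window of the last W measured positions for
    # two joints is the input; the output is a torque for each joint.
    W, JOINTS, HIDDEN = 5, 2, 20
    N_IN = W * JOINTS

    def make_windows(positions):
        """Stack the last W position samples into one input vector per
        time step (positions: T x JOINTS measured trajectory)."""
        T = positions.shape[0]
        return np.stack([positions[t - W:t].ravel() for t in range(W, T)])

    # One-hidden-layer network; these random weights stand in for
    # weights learned by back-propagation.
    W1 = rng.normal(0, 0.1, (N_IN, HIDDEN))
    W2 = rng.normal(0, 0.1, (HIDDEN, JOINTS))

    def predict_torque(window):
        h = np.tanh(window @ W1)
        return h @ W2

    traj = rng.normal(size=(100, JOINTS))   # stand-in trajectory
    windows = make_windows(traj)            # (95, 10) input vectors
    print(predict_torque(windows[0]))       # predicted 2-joint torque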


Models of Ocular Dominance Column Formation: Analytical and Computational Results

Neural Information Processing Systems

In the developing visual system in many mammalian species, there is initially a uniform, overlapping innervation of layer 4 of the visual cortex by inputs representing the two eyes. Subsequently, these inputs segregate into patches or stripes that are largely or exclusively innervated by inputs serving a single eye, known as ocular dominance patches. The ocular dominance patches are on a small scale compared to the map of the visual world, so that the initially continuous map becomes two interdigitated maps, one representing each eye. These patches, together with the layers of cortex above and below layer 4, whose responses are dominated by the eye innervating the corresponding layer 4 patch, are known as ocular dominance columns.


Statistical Prediction with Kanerva's Sparse Distributed Memory

Neural Information Processing Systems

A new viewpoint of the processing performed by Kanerva's sparse distributed memory (SDM) is presented. In conditions of near- or over-capacity, where the associative-memory behavior of the model breaks down, the processing performed by the model can be interpreted as that of a statistical predictor. Mathematical results are presented which serve as the framework for a new statistical viewpoint of sparse distributed memory and of which the standard formulation of SDM is a special case. This viewpoint suggests possible enhancements to the SDM model, including a procedure for improving the predictiveness of the system based on Holland's work with 'Genetic Algorithms', and a method for improving the capacity of SDM even when used as an associative memory. This work is the result of studies involving two seemingly separate topics that proved to share a common framework. The first topic, statistical prediction, is the task of associating extremely large perceptual state vectors with future events.
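The associative-memory reading of SDM that the abstract starts from can be sketched in a few lines of Python with NumPy. The dimensions, activation radius, and recall test below are illustrative choices, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)

    # Minimal sparse distributed memory sketch (sizes invented).
    N = 256        # address/data width in bits
    M = 1000       # number of hard locations
    R = 111        # activation radius in Hamming distance

    addresses = rng.integers(0, 2, size=(M, N))   # fixed random addresses
    counters = np.zeros((M, N), dtype=int)        # per-location counters

    def active(addr):
        # A location fires when its address is within Hamming radius R.
        return (addresses != addr).sum(axis=1) <= R

    def write(addr, data):
        # Add +1 for each 1-bit, -1 for each 0-bit at active locations.
        counters[active(addr)] += 2 * data - 1

    def read(addr):
        # Pool the active counters and threshold: majority vote per bit.
        return (counters[active(addr)].sum(axis=0) > 0).astype(int)

    pattern = rng.integers(0, 2, size=N)
    write(pattern, pattern)                  # autoassociative storage
    noisy = pattern.copy()
    flip = rng.choice(N, 20, replace=False)
    noisy[flip] ^= 1                         # corrupt 20 bits
    print((read(noisy) == pattern).mean())   # recall accuracy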


ALVINN: An Autonomous Land Vehicle in a Neural Network

Neural Information Processing Systems

ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.
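A minimal Python/NumPy sketch of a 3-layer back-propagation network of this general shape, with output units coding candidate steering directions. The unit counts, the random stand-in "road images", and the Gaussian target bump over the steering units are illustrative assumptions, not the paper's exact configuration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: a coarse 30x32 input retina, a small hidden
    # layer, and output units coding candidate steering directions.
    N_IN, N_HID, N_OUT = 30 * 32, 29, 45

    W1 = rng.normal(0, 0.05, (N_IN, N_HID))
    W2 = rng.normal(0, 0.05, (N_HID, N_OUT))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(img):
        h = sigmoid(img @ W1)
        return h, sigmoid(h @ W2)

    def steer(img):
        # Steering direction = the output unit with highest activation.
        _, out = forward(img)
        return out.argmax()

    def target(direction, width=2.0):
        # Target is a Gaussian bump centered on the correct direction.
        k = np.arange(N_OUT)
        return np.exp(-0.5 * ((k - direction) / width) ** 2)

    # Back-propagation on stand-in data (real training would use
    # simulated road images paired with correct steering directions).
    lr = 0.1
    for _ in range(200):
        img = rng.random(N_IN)             # placeholder "road image"
        t = target(rng.integers(N_OUT))    # placeholder correct direction
        h, y = forward(img)
        d2 = (y - t) * y * (1 - y)         # output delta (squared error)
        d1 = (W2 @ d2) * h * (1 - h)       # hidden delta
        W2 -= lr * np.outer(h, d2)
        W1 -= lr * np.outer(img, d1)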


An Information Theoretic Approach to Rule-Based Connectionist Expert Systems

Neural Information Processing Systems

In this paper we discuss architectures for executing probabilistic rule bases in a parallel manner, using recently introduced information-theoretic models as a theoretical basis. We begin by describing our (non-neural) learning algorithm and theory of quantitative rule modelling, followed by a discussion of the exact nature of two particular models. Finally, we work through an example of our approach, going from database to rules to inference network, and compare the network's performance with the theoretical limits for specific problems.
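As a hedged illustration of executing a probabilistic rule base in parallel, here is a Python sketch in which each fired rule contributes a log-odds weight that is pooled per hypothesis. The rules, weights, and priors are invented, and the paper derives its rule weights information-theoretically rather than by the simple scheme shown here.

    import math

    # Hypothetical rule base: each rule "IF feature THEN hypothesis"
    # carries a weight, here a log-odds contribution (the features and
    # numbers are invented for illustration).
    RULES = [
        ("fever",    "flu",  1.2),
        ("cough",    "flu",  0.8),
        ("sneezing", "cold", 1.5),
        ("fever",    "cold", -0.4),
    ]
    PRIOR_LOG_ODDS = {"flu": -1.0, "cold": -0.7}

    def infer(observed_features):
        """Fire all matching rules and pool their evidence:
        posterior log-odds = prior + sum of fired rule weights."""
        log_odds = dict(PRIOR_LOG_ODDS)
        for feature, hypothesis, weight in RULES:
            if feature in observed_features:
                log_odds[hypothesis] += weight
        # Convert log-odds back to probabilities.
        return {h: 1 / (1 + math.exp(-lo)) for h, lo in log_odds.items()}

    print(infer({"fever", "cough"}))   # e.g. flu likely, cold less so

Each per-hypothesis sum corresponds to a single unit in an inference network, which is what makes this style of rule execution naturally parallel.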