Information Technology
Simulation of the Neocognitron on a CCD Parallel Processing Architecture
Chuang, Michael L., Chiang, Alice M.
The neocognitron is a neural network for pattern recognition and feature extraction. An analog CCD parallel processing architecture developed at Lincoln Laboratory is particularly well suited to the computational requirements of shared-weight networks such as the neocognitron, and implementation of the neocognitron using the CCD architecture was simulated. A modification to the neocognitron training procedure, which improves network performance under the limited arithmetic precision that would be imposed by the CCD architecture, is presented.
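The abstract does not give the CCD device's word length or the training modification itself; as a rough, hypothetical illustration of how limited arithmetic precision can be simulated in software, the sketch below quantizes a shared-weight kernel to a fixed number of levels before computing a cell response. The bit width, weight range, and thresholded activation are assumptions, not details from the paper.

```python
import numpy as np

def quantize(w, n_bits=8, w_max=1.0):
    """Round weights to the nearest level of a uniform n_bits quantizer
    spanning [-w_max, w_max], mimicking limited analog precision."""
    levels = 2 ** n_bits - 1
    step = 2.0 * w_max / levels
    return np.clip(np.round(w / step) * step, -w_max, w_max)

# Example: a shared-weight (convolution-like) cell response computed
# with weights quantized to an assumed device precision.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.3, size=(5, 5))   # one kernel shared across positions
patch = rng.random((5, 5))                     # a local input patch
response = np.maximum(0.0, np.sum(quantize(weights) * patch))
print(response)
```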
Evolution and Learning in Neural Networks: The Number and Distribution of Learning Trials Affect the Rate of Evolution
Learning can increase the rate of evolution of a population of biological organisms (the Baldwin effect). Our simulations show that in a population of artificial neural networks solving a pattern recognition problem, no learning or too much learning leads to slow evolution of the genes, whereas an intermediate amount is optimal. Moreover, for a given total number of training presentations, the fastest evolution occurs if different individuals within each generation receive different numbers of presentations, rather than equal numbers. Because genetic algorithms (GAs) help avoid local minima in energy functions, our hybrid learning-GA systems can be applied successfully to complex, high-dimensional pattern-recognition problems.
INTRODUCTION
The structure and function of a biological network derives from both its evolutionary precursors and real-time learning.
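A toy sketch of the kind of hybrid learning-GA loop the abstract describes, under stated assumptions: a placeholder "learning" rule (hill-climbing toward a fixed target) stands in for network training, and the fitness function, selection scheme, and mutation scale are invented. The point illustrated is that a fixed training budget can be split unevenly across the individuals of a generation.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness_after_learning(genome, n_trials):
    """Placeholder fitness: how close the phenotype gets to a target after
    n_trials of simple 'learning' (a stand-in for network training)."""
    target = np.ones_like(genome)
    w = genome.copy()
    for _ in range(n_trials):
        w += 0.1 * (target - w)            # learning moves the phenotype
    return -np.sum((target - w) ** 2)       # higher is better

def next_generation(pop, trials_per_individual):
    scores = np.array([fitness_after_learning(g, t)
                       for g, t in zip(pop, trials_per_individual)])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:len(pop) // 2]]    # keep the better half
    children = parents + rng.normal(scale=0.05, size=parents.shape)
    return np.vstack([parents, children])

pop = rng.normal(size=(20, 8))
# Unequal allocation of a fixed training budget across individuals,
# rather than giving every individual the same number of trials.
budget = 200
trials = rng.multinomial(budget, np.full(len(pop), 1.0 / len(pop)))
pop = next_generation(pop, trials)
```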
Transforming Neural-Net Output Levels to Probability Distributions
Denker, John S., LeCun, Yann
The outputs of a typical multi-output classification network do not satisfy the axioms of probability; probabilities should be positive and sum to one. This problem can be solved by treating the trained network as a preprocessor that produces a feature vector that can be further processed, for instance by classical statistical estimation techniques. It is particularly useful to combine these two ideas: we implement the ideas of section 1 using Parzen windows, where the shape and relative size of each window is computed using the ideas of section 2. This allows us to make contact between important theoretical ideas (e.g. the ensemble formalism) and practical techniques (e.g. Parzen windows). Our results also shed new light on and generalize the well-known "softmax" scheme.
1 Distribution of Categories in Output Space
In many neural-net applications, it is crucial to produce a set of C numbers that serve as estimates of the probability of C mutually exclusive outcomes. For example, in speech recognition, these numbers represent the probability of C different phonemes; the probabilities of successive segments can be combined using a Hidden Markov Model.
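The "softmax" scheme mentioned above is the standard way to map raw network outputs to positive numbers that sum to one; a minimal sketch (the example scores are made up):

```python
import numpy as np

def softmax(scores):
    """Map raw network outputs to positive values that sum to one."""
    z = scores - np.max(scores)    # shift for numerical stability
    e = np.exp(z)
    return e / np.sum(e)

raw_outputs = np.array([2.0, -1.0, 0.5])   # e.g. per-phoneme scores
probs = softmax(raw_outputs)
print(probs, probs.sum())                  # all positive, sums to 1
```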
A Short-Term Memory Architecture for the Learning of Morphophonemic Rules
In the debate over the power of connectionist models to handle linguistic phenomena, considerable attention has been focused on the learning of simple morphological rules. It is a straightforward matter in a symbolic system to specify how the meanings of a stem and a bound morpheme combine to yield the meaning of a whole word and how the form of the bound morpheme depends on the shape of the stem. In a distributed connectionist system, however, where there may be no explicit morphemes, words, or rules, things are not so simple. The most important work in this area has been that of Rumelhart and McClelland (1986), together with later extensions by Marchman and Plunkett (1989). The networks involved were trained to associate English verb stems with the corresponding past-tense forms, successfully generating both regular and irregular forms and generalizing to novel inputs. This work established that rule-like linguistic behavior could be achieved in a system with no explicit rules. However, it did have important limitations, among them the following: 1. The representation of linguistic form was inadequate. This is clear, for example, from the fact that distinct lexical items may be associated with identical representations (Pinker & Prince, 1988).
Navigating through Temporal Difference
Barto, Sutton and Watkins [2] introduced a grid task as a didactic example of temporal difference planning and asynchronous dynamic programming. This paper considers the effects of changing the coding of the input stimulus, and demonstrates that the self-supervised learning of a particular form of hidden unit representation improves performance.
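A minimal sketch of the tabular TD(0) value update at the heart of temporal difference methods, run on a toy chain of states; the state count, reward, transition rule, and learning parameters below are illustrative assumptions, not the grid task or settings from the paper.

```python
import numpy as np

n_states, goal = 16, 15
V = np.zeros(n_states)          # value estimate for each state
alpha, gamma = 0.1, 0.9         # learning rate and discount factor
rng = np.random.default_rng(2)

for episode in range(500):
    s = 0
    while s != goal:
        s_next = min(s + rng.integers(1, 5), goal)   # random walk toward the goal
        r = 1.0 if s_next == goal else 0.0
        # TD(0): move V(s) toward the one-step bootstrapped target
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print(V.round(2))
```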
Generalization by Weight-Elimination with Application to Forecasting
Weigend, Andreas S., Rumelhart, David E., Huberman, Bernardo A.
Inspired by the information theoretic idea of minimum description length, we add a term to the back-propagation cost function that penalizes network complexity. We give the details of the procedure, called weight-elimination, describe its dynamics, and clarify the meaning of the parameters involved. From a Bayesian perspective, the complexity term can be usefully interpreted as an assumption about the prior distribution of the weights. We use this procedure to predict the sunspot time series and the notoriously noisy series of currency exchange rates.
1 INTRODUCTION
Learning procedures for connectionist networks are essentially statistical devices for performing inductive inference. There is a tradeoff between two goals: on the one hand, we want such devices to be as general as possible so that they are able to learn a broad range of problems.
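The abstract does not reproduce the complexity term; as the weight-elimination penalty is usually stated, it sums (w/w0)^2 / (1 + (w/w0)^2) over the weights, scaled by a coefficient lambda. The sketch below (with made-up errors and weights) shows how such a term is added to a sum-of-squares cost.

```python
import numpy as np

def weight_elimination_cost(errors, weights, lam=1e-3, w0=1.0):
    """Sum-of-squares error plus a complexity penalty on the weights.

    The penalty (w/w0)^2 / (1 + (w/w0)^2) grows like (w/w0)^2 for small
    weights and saturates near 1 for large ones, so small weights are
    pushed toward zero (eliminated) while large weights pay a flat price.
    """
    data_term = np.sum(np.asarray(errors) ** 2)
    r = (np.asarray(weights) / w0) ** 2
    complexity_term = lam * np.sum(r / (1.0 + r))
    return data_term + complexity_term

print(weight_elimination_cost(errors=[0.1, -0.2], weights=[0.01, 0.5, 3.0]))
```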
A Delay-Line Based Motion Detection Chip
Horiuchi, Tim, Lazzaro, John, Moore, Andrew, Koch, Christof
Inspired by a visual motion detection model for the rabbit retina and by a computational architecture used for early audition in the barn owl, we have designed a chip that employs a correlation model to report the one-dimensional field motion of a scene in real time. Using subthreshold analog VLSI techniques, we have fabricated and successfully tested an 8000-transistor chip using a standard MOSIS process.
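The chip itself is analog hardware, but the correlation model it draws on can be caricatured in software: a delay-and-correlate (Reichardt-style) detector multiplies the delayed signal from one photoreceptor with the current signal from its neighbor and takes the difference of the two mirror-image products. The sketch below is such a caricature; the input, delay, and sign convention are assumptions, not measurements from the chip.

```python
import numpy as np

def correlation_motion(frames, delay=1):
    """1-D delay-and-correlate motion estimate over (time, position) data.

    For each neighboring pair of 'photoreceptors', correlate one delayed
    signal with the other's current signal; the difference of the two
    mirror-image products is positive for rightward field motion and
    negative for leftward motion.
    """
    x = np.asarray(frames, dtype=float)
    past, now = x[:-delay], x[delay:]
    right = now[:, 1:] * past[:, :-1]    # left receptor delayed
    left = now[:, :-1] * past[:, 1:]     # right receptor delayed
    return np.mean(right - left)         # > 0 means rightward motion

# A bright bar moving one position to the right per time step.
frames = np.array([np.roll([1, 0, 0, 0, 0, 0], t) for t in range(6)])
print(correlation_motion(frames))        # positive -> rightward
```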
A Competitive Modular Connectionist Architecture
Jacobs, Robert A., Jordan, Michael I.
We describe a multi-network, or modular, connectionist architecture that captures the fact that many tasks have structure at a level of granularity intermediate to that assumed by local and global function approximation schemes. The main innovation of the architecture is that it combines associative and competitive learning in order to learn task decompositions. A task decomposition is discovered by forcing the networks comprising the architecture to compete to learn the training patterns. As a result of the competition, different networks learn different training patterns and, thus, learn to partition the input space. The performance of the architecture on a "what" and "where" vision task and on a multi-payload robotics task is presented.
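A stripped-down sketch of the competitive mechanism described above, assuming one scalar weight per "expert" network, a hard winner-take-all assignment, and a made-up two-regime target function; the actual architecture combines associative and competitive learning in a richer way than this.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two linear "expert" networks competing for 1-D training patterns drawn
# from two different sub-tasks (two different target slopes).
experts = rng.normal(size=2)     # one scalar weight per expert
lr = 0.1

def target(x):
    return 2.0 * x if x < 0.5 else -3.0 * x   # two regimes to decompose

for _ in range(2000):
    x = rng.random()
    y = target(x)
    errors = (experts * x - y) ** 2
    winner = np.argmin(errors)                 # competition: best expert wins...
    experts[winner] -= lr * 2 * (experts[winner] * x - y) * x   # ...and learns

# The experts typically specialize, one per regime (weights near 2 and -3),
# i.e. they learn to partition the input space.
print(experts)
```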
On Stochastic Complexity and Admissible Models for Neural Network Classifiers
Smyth, Padhraic
Given some training data, how should we choose a particular network classifier from a family of networks of different complexities? In this paper we discuss how the application of stochastic complexity theory to classifier design problems can provide some insights into this question. In particular, we introduce the notion of admissible models, whereby the complexity of the models under consideration is affected by (among other factors) the class entropy, the amount of training data, and our prior belief. We discuss the implications of these results with respect to neural architectures and demonstrate the approach on real data from a medical diagnosis task.
1 Introduction and Motivation
In this paper we examine, in a general sense, the application of Minimum Description Length (MDL) techniques to the problem of selecting a good classifier from a large set of candidate models or hypotheses. Pattern recognition algorithms differ from more conventional statistical modeling techniques in the sense that they typically choose from a very large number of candidate models to describe the available data.
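A crude two-part MDL comparison in the spirit of the discussion above: the total description length is the cost of encoding the training labels under the classifier plus a cost for describing the classifier itself. The per-parameter bit cost, parameter counts, and accuracy figures below are invented for illustration; stochastic-complexity codes are considerably more refined than this.

```python
import numpy as np

def nll_bits(probs_of_true_class):
    """Bits needed to encode the training labels under the classifier's
    predicted probabilities for the true classes."""
    return -np.sum(np.log2(probs_of_true_class))

def two_part_description_length(data_bits, n_params, bits_per_param=16):
    """Crude two-part MDL score: data cost given the model, plus a fixed
    per-parameter cost for describing the model itself."""
    return data_bits + n_params * bits_per_param

# Compare a small and a large hypothetical classifier on the same 100 examples.
small = two_part_description_length(nll_bits(np.full(100, 0.80)), n_params=10)
large = two_part_description_length(nll_bits(np.full(100, 0.95)), n_params=200)
print(small, large)   # prefer the model with the shorter total description
```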