Efficient Parallel Learning Algorithms for Neural Networks
Kramer, Alan H., Sangiovanni-Vincentelli, Alberto
Parallelizable optimization techniques are applied to the problem of learning in feedforward neural networks. In addition to having superior convergence properties, optimization techniques such as the Polak-Ribiere method are also significantly more efficient than the Backpropagation algorithm. These results are based on experiments performed on small boolean learning problems and the noisy real-valued learning problem of handwritten character recognition.

1 INTRODUCTION The problem of learning in feedforward neural networks has received a great deal of attention recently because of the ability of these networks to represent seemingly complex mappings in an efficient parallel architecture. This learning problem can be characterized as an optimization problem, but it is unique in several respects. Function evaluation is very expensive. However, because the underlying network is parallel in nature, this evaluation is easily parallelizable. In this paper, we describe the network learning problem in a numerical framework and investigate parallel algorithms for its solution. Specifically, we compare the performance of several parallelizable optimization techniques to the standard Backpropagation algorithm. Experimental results show the clear superiority of the numerical techniques.

2 NEURAL NETWORKS A neural network is characterized by its architecture, its node functions, and its interconnection weights. In a learning problem, the first two of these are fixed, so that the weight values are the only free parameters in the system.
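As a rough illustration of the idea (not the authors' code), the sketch below trains a tiny feedforward network on XOR, one of the small boolean problems mentioned, using a Polak-Ribiere conjugate-gradient update in place of plain gradient descent. The architecture, finite-difference gradient, and crude line search are all assumptions made to keep the sketch short.

import numpy as np

# Tiny 2-2-1 network on XOR, trained with Polak-Ribiere conjugate gradient.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=2*2 + 2 + 2*1 + 1)   # flat weight vector

def unpack(w):
    W1 = w[:4].reshape(2, 2); b1 = w[4:6]
    W2 = w[6:8].reshape(2, 1); b2 = w[8:]
    return W1, b1, W2, b2

def loss(w):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return 0.5 * np.sum((out - y) ** 2)

def grad(w, eps=1e-6):
    # Finite-difference gradient keeps the sketch short; backprop would be used in practice.
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

g = grad(w); d = -g
for it in range(200):
    # Crude fixed-candidate "line search" along the search direction d.
    alpha = min([0.01, 0.05, 0.1, 0.5, 1.0], key=lambda a: loss(w + a * d))
    w = w + alpha * d
    g_new = grad(w)
    beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # Polak-Ribiere coefficient, with restart
    d = -g_new + beta * d
    g = g_new
print("final loss:", loss(w))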
Automatic Local Annealing
Jared Leinbach, Department of Psychology, Carnegie-Mellon University, Pittsburgh, PA 15213
ABSTRACT This research involves a method for finding global maxima in constraint satisfaction networks. It is an annealing process but, unlike most others, requires no annealing schedule. Temperature is instead determined locally by units at each update, and thus all processing is done at the unit level. There are two major practical benefits to processing this way: 1) processing can continue in 'bad' areas of the network, while 'good' areas remain stable, and 2) processing continues in the 'bad' areas as long as the constraints remain poorly satisfied (i.e. it does not stop after some predetermined number of cycles). As a result, this method not only avoids the kludge of requiring an externally determined annealing schedule, but it also finds global maxima more quickly and consistently than externally scheduled systems (a comparison to the Boltzmann machine (Ackley et al., 1985) is made).
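A minimal sketch of the flavor of unit-level annealing, assuming a Hopfield-style network of binary units; the specific local-temperature rule below (temperature tied to how poorly a unit's own constraints are satisfied) is a guess for illustration, not the paper's actual rule.

import numpy as np

# Each unit anneals itself: well-settled regions freeze while conflicted
# regions keep exploring, with no global schedule.
rng = np.random.default_rng(1)
n = 20
W = rng.normal(size=(n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
s = rng.choice([-1.0, 1.0], size=n)

for sweep in range(200):
    for i in rng.permutation(n):
        net = W[i] @ s                                # net input from neighbours
        satisfaction = s[i] * net                     # > 0 if the unit agrees with its constraints
        T = max(0.05, 1.0 - np.tanh(satisfaction))    # local temperature (assumed form)
        p_on = 1 / (1 + np.exp(-2 * net / T))         # stochastic update at temperature T
        s[i] = 1.0 if rng.random() < p_on else -1.0

print("goodness:", 0.5 * s @ W @ s)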
An Optimality Principle for Unsupervised Learning
We propose an optimality principle for training an unsupervised feedforward neural network based upon maximal ability to reconstruct the input data from the network outputs. We describe an algorithm which can be used to train either linear or nonlinear networks with certain types of nonlinearity. Examples of applications to the problems of image coding, feature detection, and analysis of random-dot stereograms are presented.
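A minimal sketch of the reconstruction principle for the linear case (dimensions, learning rate, and the plain gradient rule are assumptions for illustration, not the paper's algorithm): learn an encoding y = Wx such that the input can be linearly rebuilt from y with minimal squared error.

import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))           # 500 input vectors of dimension 8
W = rng.normal(scale=0.1, size=(8, 3))  # encoder: 8 inputs -> 3 network outputs
V = rng.normal(scale=0.1, size=(3, 8))  # decoder used only to score reconstruction
lr = 0.01
for epoch in range(200):
    Y = X @ W               # network outputs
    Xhat = Y @ V            # reconstruction of the input from the outputs
    err = Xhat - X
    # Gradients of the mean squared reconstruction error.
    gV = Y.T @ err / len(X)
    gW = X.T @ (err @ V.T) / len(X)
    V -= lr * gV
    W -= lr * gW
print("reconstruction MSE:", float(np.mean(err ** 2)))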
Analog Implementation of Shunting Neural Networks
Nabet, Bahram, Darling, Robert B., Pinter, Robert B.
The first case shows recurrent activity, while the second case is non-recurrent or feedforward. The polarity of these terms signifies excitatory or inhibitory interactions. Shunting network equations can be derived from various sources such as the passive membrane equation with synaptic interaction (Grossberg 1973, Pinter 1983), models of dendritic interaction (Rall 1977), or experiments on motoneurons (Ellias and Grossberg 1975). While the exact mechanisms of synaptic interactions are not known in every individual case, neurobiological evidence of shunting interactions appears in several areas such as sensory systems, cerebellum, neocortex, and hippocampus (Grossberg 1973, Pinter 1987). In addition to neurobiology, these networks have been used to successfully explain data from disciplines ranging from population biology (Lotka 1956) to psychophysics and behavioral psychology (Grossberg 1983). Shunting nets have important advantages over additive models, which lack the extra nonlinearity introduced by the multiplicative terms.
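A toy simulation of a feedforward shunting unit of the Grossberg form dx/dt = -A*x + (B - x)*E - (x + C)*I may make the last point concrete: the multiplicative (B - x) and (x + C) terms bound activity to [-C, B], which is the extra nonlinearity an additive model lacks. Parameter values and inputs below are illustrative assumptions.

# Single shunting unit driven by constant excitation E and inhibition I.
A, B, C = 1.0, 1.0, 0.5
dt, steps = 0.01, 2000
x = 0.0
E, I = 5.0, 2.0
for _ in range(steps):
    x += dt * (-A * x + (B - x) * E - (x + C) * I)
print("steady state:", x, "(always stays within [-C, B])")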
Programmable Analog Pulse-Firing Neural Networks
Hamilton, Alister, Murray, Alan F., Tarassenko, Lionel
ABSTRACT We describe pulse-stream firing integrated circuits that implement asynchronous analog neural networks. Synaptic weights are stored dynamically, and weighting uses time-division of the neural pulses from a signalling neuron to a receiving neuron. MOS transistors in their "ON" state act as variable resistors to control a capacitive discharge, and time-division is thus achieved by a small synapse circuit cell. The VLSI chip set design uses a 2.5 µm process.

INTRODUCTION Neural network implementations fall into two broad classes - digital [1,2] and analog (e.g.
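A software caricature of the time-division weighting idea (not the authors' circuit): each presynaptic pulse is passed for only a fraction of its width, set by the stored weight, so the average charge delivered scales as weight times presynaptic duty cycle. All values below are illustrative assumptions.

import numpy as np

dt = 1e-6                      # 1 microsecond time step
t = np.arange(0, 0.01, dt)     # 10 ms of simulated time
rate = 2000.0                  # presynaptic pulse rate (Hz)
pulse_width = 100e-6           # pulse width (s)
weight = 0.4                   # fraction of each pulse passed to the receiving neuron

phase = (t * rate) % 1.0
presyn = (phase < rate * pulse_width).astype(float)            # presynaptic pulse stream
gated = (phase < rate * pulse_width * weight).astype(float)    # time-divided pulses
print("presynaptic duty cycle  :", presyn.mean())
print("post-synaptic duty cycle:", gated.mean(), "~= weight * presynaptic duty")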
A Low-Power CMOS Circuit Which Emulates Temporal Electrical Properties of Neurons
Meador, Jack L., Cole, Clint S.
Popular neuron models are based upon some statistical measure of known natural behavior. Whether that measure is expressed in terms of average firing rate or a firing probability, the instantaneous neuron activation is only represented in an abstract sense. Artificial electronic neurons derived from these models represent this excitation level as a binary code or a continuous voltage at the output of a summing amplifier. While such models have been shown to perform well for many applications, and form an integral part of much current work, they only partially emulate the manner in which natural neural networks operate. They ignore, for example, differences in relative arrival times of neighboring action potentials -- an important characteristic known to exist in natural auditory and visual networks (Sejnowski, 1986). They are also less adaptable to fine-grained, neuron-centered learning, like the post-tetanic facilitation observed in natural neurons. We are investigating the implementation and application of neuron circuits which better approximate natural neuron function.
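To illustrate why relative arrival times matter, here is a leaky integrate-and-fire sketch in software (not the authors' CMOS circuit): two input spikes trigger an output only when they arrive close enough together for their charge to sum before it leaks away. All constants are assumptions.

dt, tau, thresh = 0.1, 5.0, 1.5     # time step (ms), membrane time constant (ms), firing threshold

def fires(spike_times, t_end=50.0):
    v, t = 0.0, 0.0
    while t < t_end:
        v += dt * (-v / tau)                               # passive leak
        if any(abs(t - ts) < dt / 2 for ts in spike_times):
            v += 1.0                                       # each input spike injects a unit of charge
        if v >= thresh:
            return True
        t += dt
    return False

print("spikes 2 ms apart :", fires([10.0, 12.0]))   # coincident enough -> fires
print("spikes 20 ms apart:", fires([10.0, 30.0]))   # charge leaks away -> silent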
Neural Control of Sensory Acquisition: The Vestibulo-Ocular Reflex
Paulin, Michael G., Nelson, Mark E., Bower, James M.
In this paper we explore this idea by examining the function of a simple cerebellar-related behavior, the vestibulo-ocular reflex or VOR, in which eye movements are generated to minimize image slip on the retina during rapid head movements. Considering this system from the point of view of statistical estimation theory, our results suggest that the transfer function of the VOR, often regarded as a static or slowly modifiable feature of the system, should actually be continuously and rapidly changed during head movements. We further suggest that these changes are under the direct control of the cerebellar cortex and propose experiments to test this hypothesis.
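A toy illustration of the estimation-theory view (not the authors' model): if the ideal VOR gain drifts during a movement, a gain that is corrected continuously from the observed slip cancels retinal image motion better than a fixed gain. The drifting "plant", slip model, and LMS-style update rule below are all assumptions.

import numpy as np

T = 500
head = np.sin(np.linspace(0, 10 * np.pi, T))                    # head velocity profile
true_gain = 1.0 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, T))    # drifting ideal VOR gain

fixed_gain, adaptive_gain = 1.0, 1.0
slip_fixed, slip_adapt = [], []
for t in range(T):
    slip_f = (true_gain[t] - fixed_gain) * head[t]      # uncancelled image motion, fixed gain
    slip_a = (true_gain[t] - adaptive_gain) * head[t]   # uncancelled image motion, adaptive gain
    slip_fixed.append(slip_f); slip_adapt.append(slip_a)
    adaptive_gain += 0.5 * slip_a * head[t]             # rapid, slip-driven gain correction
print("mean |slip|, fixed gain   :", np.mean(np.abs(slip_fixed)))
print("mean |slip|, adaptive gain:", np.mean(np.abs(slip_adapt)))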
The Advanced Architectures Project
The Advanced Architectures Project at Stanford University's Knowledge Systems Laboratory seeks to gain higher performance for expert system applications through the design of new, innovative software and hardware architectures. This research concentrates particularly on the use of parallel machines to gain speedup and on the design of software to exploit emergent parallel hardware architectures. This article describes the project and details its goals and the work performed in pursuit of these goals. A brief description is given of each of the project components, and a complete bibliography of the publications produced for the project is included.
Review of Design Automation: Automated Full-Custom VLSI Layout Using the Ulysses Design Environment
Design Automation: Automated Full-Custom VLSI Layout Using the Ulysses Design Environment (Academic Press, Boston, Massachusetts, 1988, 463 pages) by Michael L. Bushnell deals with an interesting and challenging problem. A system called Ulysses that implements a blackboard architecture is described. The designer's input can be manually added (which itself is awkward) in the script environment, which considerably reduces the power and authority of the demonstration. The author is criticizing the capability of the Weaver system (a knowledge-based circuit interconnection router) to restart, to continue (that is, to be interrupted), or to accept that a user might specify some routing channels. This disappointing demonstration might be the result of the project's ambitious nature. The book is misleading in its treatment of some key points; the problem here is not the blackboard architecture.
Databases in Large AI Systems
Friesen, Oris D., Golshani, Forouzan
Databases are at the heart of most real-world knowledge base systems. The management and effective use of these databases will be the limiting factors in our ability to build ever more complex AI systems. This article reports on a workshop that explored how databases and their associated technologies can best be used in the development of large AI applications.