Information Technology
Generalized Hopfield Networks and Nonlinear Optimization
Reklaitis, Gintaras V., Tsirukis, Athanasios G., Tenorio, Manoel Fernando
Purdue University, W. Lafayette, IN 47907

ABSTRACT

A nonlinear neural framework, called the Generalized Hopfield network, is proposed, which is able to solve in a parallel distributed manner systems of nonlinear equations. The method is applied to the general nonlinear optimization problem. We demonstrate GHNs implementing the three most important optimization algorithms, namely the Augmented Lagrangian, Generalized Reduced Gradient and Successive Quadratic Programming methods. The study results in a dynamic view of the optimization problem and offers a straightforward model for the parallelization of the optimization computations, thus significantly extending the practical limits of problems that can be formulated as an optimization problem and which can gain from the introduction of nonlinearities in their structure.

The ability of networks of highly interconnected simple nonlinear analog processors (neurons) to solve complicated optimization problems was demonstrated in a series of papers by Hopfield and Tank (Hopfield, 1984), (Tank, 1986).
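To make the dynamic view concrete, here is a minimal sketch (not the authors' implementation) of an Augmented-Lagrangian-style gradient flow: the neuron states follow descent on the augmented Lagrangian while a multiplier unit follows ascent. The toy problem, penalty weight, step size, and Euler integration are illustrative assumptions.

```python
import numpy as np

# Toy problem: minimize f(x) = x1^2 + x2^2 subject to g(x) = x1 + x2 - 1 = 0.

def f_grad(x):
    return 2.0 * x

def g(x):
    return x[0] + x[1] - 1.0

def g_grad(x):
    return np.ones_like(x)

def simulate(mu=10.0, dt=0.01, steps=5000):
    x = np.zeros(2)      # neuron states
    lam = 0.0            # Lagrange multiplier "neuron"
    for _ in range(steps):
        # dx/dt = -grad_x L_A(x, lam);  dlam/dt = +g(x)
        grad_x = f_grad(x) + (lam + mu * g(x)) * g_grad(x)
        x -= dt * grad_x
        lam += dt * g(x)
    return x, lam

x, lam = simulate()
print(x)   # approaches (0.5, 0.5), the constrained minimum
```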
An Efficient Implementation of the Back-propagation Algorithm on the Connection Machine CM-2
Zhang, Xiru, McKenna, Michael, Mesirov, Jill P., Waltz, David L.
In this paper, we present a novel implementation of the widely used Back-propagation neural net learning algorithm on the Connection Machine CM-2, a general purpose, massively parallel computer with a hypercube topology. This implementation runs at about 180 million interconnections per second (IPS) on a 64K processor CM-2. The main interprocessor communication operation used is 2D nearest neighbor communication. The techniques developed here can be easily extended to implement other algorithms for layered neural nets on the CM-2, or on other massively parallel computers which have 2D or higher degree connections among their processors.

1 Introduction

High-speed simulation of large artificial neural nets has become an important tool for solving real world problems and for studying the dynamic behavior of large populations of interconnected processing elements [3, 2]. This work is intended to provide such a simulation tool for a widely used neural net learning algorithm, the Back-propagation (BP) algorithm [7]. The hardware we have used is the Connection Machine CM-2.
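For reference, here is a serial sketch of the BP computation that such an implementation parallelizes. The one-hidden-layer shapes, learning rate, and XOR data are illustrative assumptions, not the CM-2 code.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(2, 4))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, t, lr=0.5):
    global W1, W2
    h = sigmoid(x @ W1)               # forward pass
    y = sigmoid(h @ W2)
    dy = (y - t) * y * (1 - y)        # output delta (squared error)
    dh = (dy @ W2.T) * h * (1 - h)    # back-propagated hidden delta
    W2 -= lr * np.outer(h, dy)        # weight updates
    W1 -= lr * np.outer(x, dh)

# XOR training data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
for epoch in range(5000):
    for x, t in zip(X, T):
        train_step(x, t)
print([round(float(sigmoid(sigmoid(x @ W1) @ W2)[0]), 2) for x in X])
```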
Neurally Inspired Plasticity in Oculomotor Processes
We have constructed a two-axis camera positioning system which is roughly analogous to a single human eye. This Artificial-Eye (A-eye) combines the signals generated by two rate gyroscopes with motion information extracted from visual analysis to stabilize its camera. This stabilization process is similar to the vestibulo-ocular response (VOR); like the VOR, A-eye learns a system model that can be incrementally modified to adapt to changes in its structure, performance and environment. A-eye is an example of a robust sensory system that performs computations that can be of significant use to the designers of mobile robots.

1 Introduction

We have constructed an "artificial eye" (A-eye), an autonomous robot that incorporates a two-axis camera positioning system (figure 1). Like the human oculomotor system, A-eye can estimate the rotation rate of its body with a gyroscope and estimate the rotation rate of its "eye" by measuring image slip.
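A minimal sketch of this kind of slip-driven adaptation, under strong simplifying assumptions (a scalar single-axis plant, a hypothetical true_gain parameter, and a simple correlational update; not A-eye's actual controller):

```python
import math

def simulate(true_gain=1.3, g=1.0, lr=0.05, steps=200):
    for t in range(steps):
        head_rate = math.sin(0.1 * t)          # body rotation (gyro signal)
        eye_rate = -g * head_rate              # feed-forward counter-rotation
        # image slip: residual motion the vision system measures;
        # true_gain models the plant/optics scaling of the needed command
        slip = true_gain * head_rate + eye_rate
        g += lr * slip * head_rate             # slip-driven gain adaptation
    return g

print(simulate())   # the learned gain converges toward true_gain (about 1.3)
```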
Training Stochastic Model Recognition Algorithms as Networks can Lead to Maximum Mutual Information Estimation of Parameters
One of the attractions of neural network approaches to pattern recognition is the use of a discrimination-based training method. We show that once we have modified the output layer of a multilayer perceptron to provide mathematically correct probability distributions, and replaced the usual squared error criterion with a probability-based score, the result is equivalent to Maximum Mutual Information training, which has been used successfully to improve the performance of hidden Markov models for speech recognition. If the network is specially constructed to perform the recognition computations of a given kind of stochastic model based classifier then we obtain a method for discrimination-based training of the parameters of the models. Examples include an HMM-based word discriminator, which we call an 'Alphanet'.
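The two modifications named here can be sketched directly: a softmax (normalized exponential) output layer so the network emits a proper probability distribution, and the negative log-probability of the correct class as the training score. The activation values below are illustrative; this is a sketch of the criterion, not the paper's code.

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; result sums to 1
    e = np.exp(z - z.max())
    return e / e.sum()

def neg_log_prob_score(z, target_class):
    # discrimination-based criterion: -log P(correct class | input)
    return -np.log(softmax(z)[target_class])

z = np.array([2.0, 0.5, -1.0])   # output-layer activations (logits)
print(softmax(z), neg_log_prob_score(z, 0))
```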
A Neural Network to Detect Homologies in Proteins
Bengio, Yoshua, Bengio, Samy, Pouliot, Yannick, Agin, Patrick
Furthermore, sequence similarity often results from common ancestors. Immunoglobulin (Ig) domains are sets of β-sheets bound by cysteine bonds and with a characteristic tertiary structure. Such domains are found in many proteins involved in immune, cell adhesion and receptor functions. These proteins collectively form the immunoglobulin superfamily (for review, see Williams and Barclay, 1987). Members of the superfamily often possess several Ig domains.
On the Distribution of the Number of Local Minima of a Random Function on a Graph
Baldi, Pierre, Rinott, Yosef, Stein, Charles
Minimization of energy or error functions has proved to be a useful principle in the design and analysis of neural networks and neural algorithms. A brief list of examples includes: the back-propagation algorithm, the use of optimization methods in computational vision, the application of analog networks to the approximate solution of NP-complete problems, and the Hopfield model of associative memory.
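As an illustration of the object of study, the sketch below counts local minima of an i.i.d. random labeling of the n-dimensional hypercube, where a local minimum is a vertex whose value lies below those of all n neighbors; for continuous i.i.d. values the expected count is 2^n/(n+1). The hypercube graph and sample sizes are illustrative assumptions.

```python
import itertools, random

def count_local_minima(n, rng):
    # i.i.d. uniform values on the vertices of the n-cube
    vals = {v: rng.random() for v in itertools.product((0, 1), repeat=n)}
    def neighbors(v):
        for i in range(n):            # flip one coordinate at a time
            yield v[:i] + (1 - v[i],) + v[i + 1:]
    return sum(all(vals[v] < vals[u] for u in neighbors(v)) for v in vals)

rng = random.Random(0)
n = 6
samples = [count_local_minima(n, rng) for _ in range(200)]
print(sum(samples) / len(samples))    # compare with 2**n / (n + 1) ~ 9.14
```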
Learning Aspect Graph Representations from View Sequences
Seibert, Michael, Waxman, Allen M.
In our effort to develop a modular neural system for invariant learning and recognition of 3D objects, we introduce here a new module architecture called an aspect network constructed around adaptive axo-axo-dendritic synapses. This builds upon our existing system (Seibert & Waxman, 1989) which processes 2D shapes and classifies them into view categories (i.e., aspects) invariant to illumination, position, and orientation.
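A heavily simplified sketch of the underlying idea of learning an aspect graph from a view sequence (accumulating aspect-to-aspect transition strengths; this stands in for, and is much cruder than, the adaptive axo-axo-dendritic synapses of the aspect network):

```python
from collections import defaultdict

def learn_aspect_graph(view_sequence):
    # accumulate evidence for each observed aspect-to-aspect transition
    adj = defaultdict(float)
    for prev, cur in zip(view_sequence, view_sequence[1:]):
        if prev != cur:               # a transition between distinct aspects
            adj[(prev, cur)] += 1.0
    return dict(adj)

views = ["front", "front", "front-left", "left", "left", "back-left"]
print(learn_aspect_graph(views))
```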
A Computational Basis for Phonology
Touretzky, David S., Wheeler, Deirdre W.
Through a combination of connectionist modeling and linguistic analysis, we are attempting to develop a computational basis for the nature of phonology. We present a connectionist architecture that performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes, and introduce a novel additional primitive, clustering. Clustering provides an interesting alternative to both iterative and relaxation accounts of assimilation processes such as vowel harmony. Our resulting model is efficient because it processes utterances entirely in parallel using only feed-forward circuitry.
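A toy sketch of the parallel, feed-forward character of such edits (assumed rule tables, not the paper's architecture): every position is rewritten in a single pass, with no iteration or relaxation.

```python
def apply_parallel(phonemes, mutate=None, delete=(), insert_after=None):
    mutate = mutate or {}
    insert_after = insert_after or {}
    out = []
    for p in phonemes:                  # each position is decided
        if p in delete:                 # independently: one pass,
            continue                    # no iteration or relaxation
        out.append(mutate.get(p, p))
        out.extend(insert_after.get(p, ()))
    return out

# toy assimilation-like mutation: the front vowel "i" becomes back "u"
print(apply_parallel(list("kitab"), mutate={"i": "u"}))
```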
Incremental Parsing by Modular Recurrent Connectionist Networks
We present a novel, modular, recurrent connectionist network architecture which learns to robustly perform incremental parsing of complex sentences. From sequential input, one word at a time, our networks learn to do semantic role assignment, noun phrase attachment, and clause structure recognition for sentences with passive constructions and center-embedded clauses. The networks make syntactic and semantic predictions at every point in time, and previous predictions are revised as expectations are affirmed or violated with the arrival of new information. Our networks induce their own "grammar rules" for dynamically transforming an input sequence of words into a syntactic/semantic interpretation.
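The incremental setting can be sketched with an Elman-style recurrent step (hypothetical dimensions and random weights, not the authors' modular architecture): one word enters per time step and a prediction is emitted at every step, so earlier outputs can be revised as new words arrive.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 16, 4
W_ih = rng.normal(scale=0.1, size=(n_in, n_hid))   # word -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hid, n_hid))  # recurrent context
W_ho = rng.normal(scale=0.1, size=(n_hid, n_out))  # hidden -> prediction

def run_incremental(word_vectors):
    h = np.zeros(n_hid)
    outputs = []
    for x in word_vectors:                  # one word at a time
        h = np.tanh(x @ W_ih + h @ W_hh)    # context carries the history
        outputs.append(np.tanh(h @ W_ho))   # prediction at this time step
    return outputs                          # later steps may revise earlier ones

sentence = rng.normal(size=(5, n_in))       # 5 dummy word encodings
print(len(run_incremental(sentence)))       # one prediction per word
```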
VLSI Implementation of a High-Capacity Neural Network Associative Memory
Chiueh, Tzi-Dar, Goodman, Rodney M.
In this paper we describe the VLSI design and testing of a high capacity associative memory which we call the exponential correlation associative memory (ECAM). The prototype 3µ-CMOS programmable chip is capable of storing 32 memory patterns of 24 bits each. The high capacity of the ECAM is partly due to the use of special exponentiation neurons, which are implemented via sub-threshold MOS transistors in this design. The prototype chip is capable of performing one associative recall in 3 µs.

1 ARCHITECTURE

Previously (Chiueh, 1989), we have proposed a general model for correlation-based associative memories, which includes a variant of the Hopfield memory and high-order correlation memories as special cases. This new exponential correlation associative memory (ECAM) possesses a very large storage capacity, which scales exponentially with the length of memory patterns (Chiueh, 1988).
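ECAM-style recall can be sketched as follows (a software sketch, not the chip; the base a is an illustrative assumption, while the 32 patterns of 24 bits match the abstract): each stored pattern is weighted by an exponential of its correlation with the current state, and the state updates to the sign of the weighted sum.

```python
import numpy as np

def ecam_recall(patterns, probe, a=2.0, iters=10):
    x = probe.copy()
    for _ in range(iters):
        corr = patterns @ x                  # <m_i, x> for each stored pattern
        weights = a ** corr                  # exponentiation "neurons"
        x_new = np.sign(weights @ patterns)  # weighted-sum update
        if np.array_equal(x_new, x):         # stop at a fixed point
            break
        x = x_new
    return x

rng = np.random.default_rng(1)
M = np.sign(rng.standard_normal((32, 24)))   # 32 patterns of 24 bits
probe = M[0].copy()
probe[:3] *= -1                              # corrupt 3 bits
print(np.array_equal(ecam_recall(M, probe), M[0]))   # True: pattern recovered
```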