Neural Information Processing Systems
Hierarchical Learning Control - An Approach with Neuron-Like Associative Memories
In this paper, research of the second line is described: starting from a neurophysiologically inspired model of stimulus-response (SR) and/or associative memorization and a psychologically motivated ministructure for basic control tasks, preconditions and conditions are studied for the cooperation of such units in a hierarchical organisation, as can be assumed to be the general layout of macrostructures in the brain.

I. INTRODUCTION

Theoretical modelling in brain theory is a highly speculative subject. However, it is necessary, since it seems very unlikely that a clear picture of this very complicated device can be obtained by analyzing the available measurements on sound and/or damaged brain parts alone. As in general physics, one has to realize that there are different levels of modelling: in physics stretching from the atomic level over atom assemblies up to general behavioural models like kinematics and mechanics; in brain theory stretching from chemical reactions over electrical spikes and neuronal cell assembly cooperation up to general human behaviour. The research discussed in this paper is located just above the direct study of synaptic cooperation of neuronal cell assemblies as studied e.g. in /Amari 1988/. It takes into account the changes of synaptic weighting, without simulating the physical details of such changes, and makes use of a general imitation of learning situation (stimuli)-response connections for building up trainable basic control loops, which allow dynamic SR memorization and which are themselves elements of more complex behavioural loops.
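The SR memorization such units build on can be pictured with a minimal correlation-matrix associative memory; the sketch below (Python, with illustrative sizes and random data, not the authors' model) stores a few stimulus-response pairs as outer products and recalls a response from a corrupted stimulus.

import numpy as np

# Hebbian storage: stimulus-response (SR) pairs accumulate in one weight
# matrix, a crude imitation of synaptic-weight learning.
rng = np.random.default_rng(0)
stimuli   = rng.choice([-1, 1], size=(3, 32)).astype(float)   # bipolar stimuli
responses = rng.choice([-1, 1], size=(3, 8)).astype(float)    # paired responses

W = responses.T @ stimuli / stimuli.shape[1]

# Recall: a noisy stimulus is mapped through W and thresholded; for a few
# stored patterns this typically reproduces the paired response exactly.
probe = stimuli[1].copy()
probe[:4] *= -1                      # corrupt four components
recalled = np.sign(W @ probe)
print("correct recall:", np.array_equal(recalled, responses[1]))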
Microelectronic Implementations of Connectionist Neural Networks
Mackie, Stuart, Graf, Hans Peter, Schwartz, Daniel B., Denker, John S.
Three chip designs are described: a hybrid digital/analog programmable connection matrix, an analog connection matrix with adjustable connection strengths, and a digital pipelined best-match chip. The common feature of the designs is the distribution of arithmetic processing power amongst the data storage to minimize data movement.
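The shared design principle, arithmetic placed with the stored weights so that only inputs and result sums travel, can be caricatured in software; the sketch below is an illustrative analogy, not a description of the chips' circuits.

# Each 'storage row' owns its weights and accumulates locally; only the
# input vector and one scalar sum per row ever move.
def connection_matrix_mac(weights, x):
    outputs = []
    for row in weights:                  # one storage site per row
        acc = 0
        for w, xi in zip(row, x):        # local multiply-accumulate
            acc += w * xi
        outputs.append(acc)              # ship out only the sum
    return outputs

print(connection_matrix_mac([[1, -1, 1], [0, 1, 1]], [1, 1, 0]))   # -> [0, 1]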
Learning on a General Network
Amir F. Atiya, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125

Abstract

This paper generalizes the backpropagation method to a general network containing feedback connections. The network model considered consists of interconnected groups of neurons, where each group could be fully interconnected (it could have feedback connections, with possibly asymmetric weights), but no loops between the groups are allowed. A stochastic descent algorithm is applied, under a certain inequality constraint on each intragroup weight matrix, which ensures that the network possesses a unique equilibrium state for every input.

Introduction

It has been shown in the last few years that large networks of interconnected "neuron"-like elements can collectively perform complex computational tasks. One of the well-known neural network models is the backpropagation model [1]-[4]. It is an elegant way of teaching a layered feedforward network by a set of given input/output examples.
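The role of the inequality constraint can be illustrated with the standard contraction argument: if the spectral norm of an intragroup weight matrix stays below the reciprocal of the activation's slope bound, fixed-point iteration settles into a single equilibrium. The sketch below uses that generic condition (an assumption, not necessarily the paper's exact inequality).

import numpy as np

rng = np.random.default_rng(1)
n = 6
W = rng.normal(size=(n, n))           # asymmetric intragroup weights
W *= 0.9 / np.linalg.norm(W, 2)       # ||W|| < 1: with tanh (slope <= 1)
                                      # the update map is a contraction,
                                      # so the equilibrium is unique
b = rng.normal(size=n)                # external input to the group

x = np.zeros(n)
for _ in range(200):                  # relaxation to the fixed point
    x_new = np.tanh(W @ x + b)
    if np.max(np.abs(x_new - x)) < 1e-10:
        break
    x = x_new
print("unique equilibrium:", np.round(x, 4))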
A Neural-Network Solution to the Concentrator Assignment Problem
Tagliarini, Gene A., Page, Edward W.
This paper presents a neural-net solution to a resource allocation problem that arises in providing local access to the backbone of a wide-area communication network. The problem is described in terms of an energy function that can be mapped onto an analog computational network. Simulation results characterizing the performance of the neural computation are also presented.

INTRODUCTION

This paper presents a neural-network solution to a resource allocation problem that arises in providing access to the backbone of a communication network [1]. In the field of operations research, this problem was first known as the warehouse location problem, and heuristics for finding feasible, suboptimal solutions have been developed previously [2].
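As a rough illustration of the energy-function idea, a constraint of the form "exactly k of these n hypothesis neurons should be on" can be written as a quadratic penalty whose gradient drives an analog relaxation. The sketch below is a generic Hopfield/Tank-style formulation with assumed parameter values, not the paper's exact network.

import numpy as np

def energy(v, k):
    # zero exactly when sum(v) == k and every v_i is 0 or 1
    return (np.sum(v) - k) ** 2 + np.sum(v * (1.0 - v))

n, k = 8, 3
rng = np.random.default_rng(2)
v = rng.uniform(0.4, 0.6, size=n)     # analog neuron outputs in (0, 1)

step = 0.05
for _ in range(2000):                 # gradient descent on the energy
    grad = 2 * (np.sum(v) - k) + (1.0 - 2.0 * v)
    v = np.clip(v - step * grad, 0.0, 1.0)

# typically settles with exactly k units on
print("states:", np.round(v, 2), "on-count:", int(round(np.sum(v))))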
High Density Associative Memories
A"'ir Dembo Information Systems Laboratory, Stanford University Stanford, CA 94305 Ofer Zeitouni Laboratory for Information and Decision Systems MIT, Cambridge, MA 02139 ABSTRACT A class of high dens ity assoc iat ive memories is constructed, starting from a description of desired properties those should exhib it. These propert ies include high capac ity, controllable bas ins of attraction and fast speed of convergence. Fortunately enough, the resulting memory is implementable by an artificial Neural Net. I NfRODUCTION Most of the work on assoc iat ive memories has been structure oriented, i.e.. given a Neural architecture, efforts were directed towards the analysis of the resulting network. Issues like capacity, basins of attractions, etc. were the main objects to be analyzed cf., e.g.
Neuromorphic Networks Based on Sparse Optical Orthogonal Codes
Vecchi, Mario P., Salehi, Jawad A.
Synthetic neural nets [1,2] represent an active and growing research field. Fundamental issues, as well as practical implementations with electronic and optical devices, are being studied. In addition, several learning algorithms have been studied, for example stochastically adaptive systems [3] based on many-body physics optimization concepts [4,5]. Signal processing in the optical domain has also been an active field of research. A wide variety of nonlinear all-optical devices are being studied, directed towards applications both in optical computing and in optical switching.
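The defining property of an optical orthogonal code, sparse 0/1 sequences whose cyclic autocorrelation sidelobes and pairwise cross-correlations stay at or below one, is easy to verify numerically. The two length-13, weight-3 codewords below are a hand-picked illustration (an assumption, not taken from the paper).

import numpy as np

def cyclic_corr(a, b):
    n = len(a)
    return [int(np.dot(a, np.roll(b, s))) for s in range(n)]

c1 = np.zeros(13, dtype=int); c1[[0, 1, 4]] = 1   # pairwise differences all distinct
c2 = np.zeros(13, dtype=int); c2[[0, 2, 7]] = 1

print("c1 auto sidelobes:", max(cyclic_corr(c1, c1)[1:]))   # <= 1
print("c2 auto sidelobes:", max(cyclic_corr(c2, c2)[1:]))   # <= 1
print("cross-correlation:", max(cyclic_corr(c1, c2)))       # <= 1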
Teaching Artificial Neural Systems to Drive: Manual Training Techniques for Autonomous Systems
To demonstrate these methods we have trained an ANS network to drive a vehicle through simulated freeway traffic.

Introduction

Computational systems employing fine-grained parallelism are revolutionizing the way we approach a number of long-standing problems involving pattern recognition and cognitive processing. The field spans a wide variety of computational networks, from constructs emulating neural functions to more crystalline configurations that resemble systolic arrays. Several titles are used to describe this broad area of research; we use the term artificial neural systems (ANS). Our concern in this work is the use of ANS for manually training certain types of autonomous systems where the desired rules of behavior are difficult to formulate. Artificial neural systems consist of a number of processing elements interconnected in a weighted, user-specified fashion, the interconnection weights acting as memory for the system. Each processing element calculates an output value based on the weighted sum of its inputs. In addition, the input data is correlated with the output or desired output (specified by an instructive agent) in a training rule that is used to adjust the interconnection weights.
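The element-plus-training-rule description above can be made concrete with a minimal delta-rule sketch: the element squashes a weighted sum, and the update correlates the input with the error against the instructor's desired output. This is a generic Widrow-Hoff-style stand-in with invented data, not the authors' exact rule.

import numpy as np

rng = np.random.default_rng(4)
w = rng.normal(scale=0.1, size=3)     # interconnection weights = the memory

def element(x):
    return np.tanh(np.dot(w, x))      # weighted sum through a squashing function

# instructive agent: desired responses for two input patterns
data = [(np.array([1.0, 0.0, 1.0]),  0.8),
        (np.array([0.0, 1.0, 1.0]), -0.8)]

lr = 0.5
for _ in range(500):
    for x, target in data:
        y = element(x)
        w += lr * (target - y) * (1 - y**2) * x   # error-correlation update
print([round(float(element(x)), 2) for x, _ in data])   # approaches [0.8, -0.8]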
On Tropistic Processing and Its Applications
It can be shown that a straightforward generalization of the tropism phenomenon allows the efficient implementation of effective algorithms which appear to respond "intelligently" to changing environmental conditions. Examples of the utilization of tropistic processing techniques will be presented in this paper in applications entailing simulated behavior synthesis, path planning, pattern analysis (clustering), and engineering design optimization.

INTRODUCTION

The goal of this paper is to present an intuitive overview of a general unsupervised procedure for addressing a variety of system control and cost minimization problems. This procedure is based on the idea of utilizing "stimuli" produced by the environment in which the systems are designed to operate as a basis for dynamically providing the necessary system parameter updates. This is by no means a new idea: countless examples of this approach abound in nature, where innate reactions to specific stimuli ("tropisms" or "taxis", not to be confused with "instincts") provide organisms with built-in first-order control laws for triggering varied responses [8].
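A toy instance of a tropism as a built-in first-order control law: an agent samples an environmental stimulus on each side of its heading and turns toward the stronger reading, with no supervision or stored map. Everything here (field shape, gains) is an assumed illustration, not an example from the paper.

import math

def stimulus(x, y, src=(5.0, 3.0)):
    return 1.0 / (1.0 + (x - src[0])**2 + (y - src[1])**2)

x, y, heading = 0.0, 0.0, 0.0
for _ in range(60):
    left  = stimulus(x + math.cos(heading + 0.3), y + math.sin(heading + 0.3))
    right = stimulus(x + math.cos(heading - 0.3), y + math.sin(heading - 0.3))
    heading += 0.5 * (left - right) / (left + right)   # turn toward the stimulus
    x += 0.2 * math.cos(heading)
    y += 0.2 * math.sin(heading)
print(f"final position: ({x:.2f}, {y:.2f})")   # should drift toward (5, 3)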