Neural Information Processing Systems
A Dynamical Approach to Temporal Pattern Processing
Stornetta, W. Scott, Hogg, Tad, Huberman, Bernardo A.
W. Scott Stornetta Stanford University, Physics Department, Stanford, Ca., 94305 Tad Hogg and B. A. Huberman Xerox Palo Alto Research Center, Palo Alto, Ca. 94304 ABSTRACT Recognizing patterns with temporal context is important for such tasks as speech recognition, motion detection and signature verification. We propose an architecture in which time serves as its own representation, and temporal context is encoded in the state of the nodes. We contrast this with the approach of replicating portions of the architecture to represent time. As one example of these ideas, we demonstrate an architecture with capacitive inputs serving as temporal feature detectors in an otherwise standard back propagation model. Experiments involving motion detection and word discrimination serve to illustrate novel features of the system.
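As a hedged illustration of the capacitive-input idea, the sketch below gives each input node a decaying trace of past activity, so a static network sees temporal context without replicating architecture across time steps; the decay constant and the toy moving-spot stimulus are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def capacitive_trace(signal, alpha=0.7):
    """Leaky-integrator inputs: each node holds a decaying trace of its history."""
    trace = np.zeros(signal.shape[1])
    out = []
    for frame in signal:
        trace = alpha * trace + (1.0 - alpha) * frame  # capacitor charges/decays
        out.append(trace.copy())
    return np.array(out)

# Two sweeps of a bright spot use the same four static frames in opposite
# order; their final traces differ, so direction of motion is encoded in the
# node state and a standard back propagation net can learn to tell them apart.
rightward = np.eye(4)
leftward = np.eye(4)[::-1]
print(capacitive_trace(rightward)[-1])
print(capacitive_trace(leftward)[-1])
```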
High Density Associative Memories
A"'ir Dembo Information Systems Laboratory, Stanford University Stanford, CA 94305 Ofer Zeitouni Laboratory for Information and Decision Systems MIT, Cambridge, MA 02139 ABSTRACT A class of high dens ity assoc iat ive memories is constructed, starting from a description of desired properties those should exhib it. These propert ies include high capac ity, controllable bas ins of attraction and fast speed of convergence. Fortunately enough, the resulting memory is implementable by an artificial Neural Net. I NfRODUCTION Most of the work on assoc iat ive memories has been structure oriented, i.e.. given a Neural architecture, efforts were directed towards the analysis of the resulting network. Issues like capacity, basins of attractions, etc. were the main objects to be analyzed cf., e.g.
A Method for the Design of Stable Lateral Inhibition Networks that is Robust in the Presence of Circuit Parasitics
Wyatt, John L., Jr., Standley, D. L.
A METHOD FOR THE DESIGN OF STABLE LATERAL INHIBITION NETWORKS THAT IS ROBUST IN THE PRESENCE OF CIRCUIT PARASITICS J.L. WYATT, Jr. and D.L. STANDLEY Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Cambridge, Massachusetts 02139 ABSTRACT In the analog VLSI implementation of neural systems, it is sometimes convenient to build lateral inhibition networks by using a locally connected on-chip resistive grid. A serious problem of unwanted spontaneous oscillation often arises with these circuits and renders them unusable in practice. This paper reports a design approach that guarantees such a system will be stable, even though the values of designed elements and parasitic elements in the resistive grid may be unknown. The method is based on a rigorous, somewhat novel mathematical analysis using Tellegen's theorem and the idea of Popov multipliers from control theory. It is thoroughly practical because the criteria are local in the sense that no overall analysis of the interconnected system is required, empirical in the sense that they involve only measurable frequency response data on the individual cells, and robust in the sense that unmodelled parasitic resistances and capacitances in the interconnection network cannot affect the analysis.
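The flavor of such a per-cell test can be sketched, with the caveat that the inequality below is a generic Popov-type condition chosen for illustration; the paper's precise local criteria on measured cell frequency responses may differ.

```python
import numpy as np

def passes_popov_test(freqs_hz, response, q):
    """Check Re[(1 + j*w*q) * H(jw)] >= 0 at every measured frequency point."""
    w = 2.0 * np.pi * np.asarray(freqs_hz)
    return bool(np.all(np.real((1.0 + 1j * w * q) * response) >= 0.0))

# Example with a single-pole cell response H(s) = 1/(1 + s*tau), sampled the
# way bench measurements of one fabricated cell would be; no analysis of the
# full interconnected grid is needed, only this local, per-cell data.
tau = 1e-6
freqs = np.logspace(3, 8, 200)
H = 1.0 / (1.0 + 1j * 2.0 * np.pi * freqs * tau)
print(passes_popov_test(freqs, H, q=tau))  # True: this cell passes
```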
Teaching Artificial Neural Systems to Drive: Manual Training Techniques for Autonomous Systems
To demonstrate these methods we have trained an ANS network to drive a vehicle through simulated freeway traffic. Introduction Computational systems employing fine grained parallelism are revolutionizing the way we approach a number of long standing problems involving pattern recognition and cognitive processing. The field spans a wide variety of computational networks, from constructs emulating neural functions, to more crystalline configurations that resemble systolic arrays. Several titles are used to describe this broad area of research; we use the term artificial neural systems (ANS). Our concern in this work is the use of ANS for manually training certain types of autonomous systems where the desired rules of behavior are difficult to formulate. Artificial neural systems consist of a number of processing elements interconnected in a weighted, user-specified fashion, the interconnection weights acting as memory for the system. Each processing element calculates an output value based on the weighted sum of its inputs. In addition, the input data is correlated with the output or desired output (specified by an instructive agent) in a training rule that is used to adjust the interconnection weights.
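A minimal sketch of one such processing element and its training rule follows, assuming a tanh squashing function, a delta-rule update, and a scalar steering target; these specifics are illustrative, not the exact network trained in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=3)  # interconnection weights act as memory

def element_output(x, w):
    """A processing element outputs a squashed weighted sum of its inputs."""
    return np.tanh(np.dot(w, x))

def train_step(x, desired, w, lr=0.05):
    """Correlate input with the instructive agent's desired output (delta rule)."""
    return w + lr * (desired - element_output(x, w)) * x

# The instructive agent here stands in for a human driver's steering command.
x = np.array([0.9, -0.2, 1.0])   # e.g., sensor readings plus a bias input
desired = 0.5                    # e.g., steer gently right
for _ in range(200):
    weights = train_step(x, desired, weights)
print(element_output(x, weights))  # converges toward 0.5
```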
Analysis of Distributed Representation of Constituent Structure in Connectionist Systems
A general method, the tensor product representation, is described for the distributed representation of value/variable bindings. The method allows the fully distributed representation of symbolic structures: the roles in the structures, as well as the fillers for those roles, can be arbitrarily non-local. Fully and partially localized special cases reduce to existing cases of connectionist representations of structured data; the tensor product representation generalizes these and the few existing examples of fully distributed representations of structures. The representation saturates gracefully as larger structures are represented; it permits recursive construction of complex representations from simpler ones; it respects the independence of the capacities to generate and maintain multiple bindings in parallel; it extends naturally to continuous structures and continuous representational patterns; it permits values to also serve as variables; it enables analysis of the interference of symbolic structures stored in associative memories; and it leads to characterization of optimal distributed representations of roles and a recirculation algorithm for learning them. Introduction Any model of complex information processing in networks of simple processors must solve the problem of representing complex structures over network elements. Connectionist models of realistic natural language processing, for example, must employ computationally adequate representations of complex sentences. Many connectionists feel that to develop connectionist systems with the computational power required by complex tasks, distributed representations must be used: an individual processing unit must participate in the representation of multiple items, and each item must be represented as a pattern of activity of multiple processors. Connectionist models have used more or less distributed representations of more or less complex structures, but little if any general analysis of the problem of distributed representation of complex information has been carried out. This paper reports results of an analysis of a general method called the tensor product representation.
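A minimal sketch of the binding operation itself, assuming orthonormal role vectors (under which unbinding is exact) and small illustrative filler patterns:

```python
import numpy as np

def bind(fillers, roles):
    """Tensor product representation: superimpose filler (x) role outer products."""
    return sum(np.outer(f, r) for f, r in zip(fillers, roles))

def unbind(structure, role):
    """With orthonormal roles, the filler is recovered by an inner product."""
    return structure @ role

roles = np.eye(2)                  # two orthonormal role vectors
A = np.array([1.0, 0.0, 1.0])      # distributed filler patterns
B = np.array([0.0, 1.0, 1.0])
S = bind([A, B], roles)            # one fully distributed structure
print(unbind(S, roles[0]))         # recovers A
print(unbind(S, roles[1]))         # recovers B
```

With non-orthogonal distributed roles the same code runs but unbinding becomes approximate, one face of the graceful saturation the abstract mentions.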
Simulations Suggest Information Processing Roles for the Diverse Currents in Hippocampal Neurons
ABSTRACT A computer model of the hippocampal pyramidal cell (HPC) is described which integrates data from a variety of sources in order to develop a consistent description for this cell type. The model presently includes descriptions of eleven nonlinear somatic currents of the HPC, and the electrotonic structure of the neuron is modelled with a soma/short-cable approximation. Model simulations qualitatively or quantitatively reproduce a wide range of somatic electrical behavior in HPCs, and demonstrate possible roles for the various currents in information processing. There are several substrates for neuronal computation, including connectivity, synapses, morphometrics of dendritic trees, linear parameters of cell membrane, as well as nonlinear, time-varying membrane conductances, also referred to as currents or channels. In the classical description of neuronal function, the contribution of membrane channels is constrained to that of generating the action potential, setting firing threshold, and establishing the relationship between (steady-state) stimulus intensity and firing frequency.
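For a sense of what such a simulation computes, here is a heavily reduced sketch: a single somatic compartment with one leak and one simplified gated current, integrated by forward Euler. The real model has eleven nonlinear currents and a soma/short-cable electrotonic structure, and every constant below is an illustrative assumption.

```python
import numpy as np

# Forward-Euler integration of C dV/dt = -g_L (V - E_L) - g_K n (V - E_K) + I.
C, g_L, E_L = 1.0, 0.1, -70.0        # uF/cm^2, mS/cm^2, mV
g_K, E_K, tau_n = 3.0, -90.0, 5.0    # mS/cm^2, mV, ms
dt, I_inj = 0.05, 2.0                # ms, uA/cm^2

def n_inf(V):
    """Steady-state activation of the gated current (sigmoid assumption)."""
    return 1.0 / (1.0 + np.exp(-(V + 50.0) / 5.0))

V, n = -70.0, 0.0
for _ in range(int(200.0 / dt)):     # simulate 200 ms of somatic response
    n += dt * (n_inf(V) - n) / tau_n
    V += dt * (-g_L * (V - E_L) - g_K * n * (V - E_K) + I_inj) / C
print(f"membrane potential after 200 ms: {V:.1f} mV")
```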
A Neural-Network Solution to the Concentrator Assignment Problem
Tagliarini, Gene A., Page, Edward W.
A NEURAL-NETWORK SOLUTION TO THE CONCENTRATOR ASSIGNMENT PROBLEM Gene A. Tagliarini Edward W. Page Department of Computer Science, Clemson University, Clemson, SC 29634-1906 ABSTRACT Networks of simple analog processors having neuron-like properties have been employed to compute good solutions to a variety of optimization problems. This paper presents a neural-net solution to a resource allocation problem that arises in providing local access to the backbone of a wide-area communication network. The problem is described in terms of an energy function that can be mapped onto an analog computational network. Simulation results characterizing the performance of the neural computation are also presented. INTRODUCTION This paper presents a neural-network solution to a resource allocation problem that arises in providing access to the backbone of a communication network [1]. In the field of operations research, this problem was first known as the warehouse location problem and heuristics for finding feasible, suboptimal solutions have been developed previously [2].
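To make the energy-function mapping concrete, the sketch below writes the assignment as a matrix of 0/1 units, with penalty terms pushing each site toward exactly one concentrator and each concentrator toward its capacity; the penalty weights and the simple descent loop are illustrative stand-ins for the analog network dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, n_conc, capacity = 6, 3, 2
cost = rng.uniform(0.0, 1.0, size=(n_sites, n_conc))  # site-to-concentrator costs

def energy(V, A=2.0, B=2.0, C=0.5):
    """Constraint penalties plus assignment cost, in the Hopfield-Tank style."""
    one_per_site = np.sum((V.sum(axis=1) - 1.0) ** 2)
    over_capacity = np.sum(np.maximum(V.sum(axis=0) - capacity, 0.0) ** 2)
    return A * one_per_site + B * over_capacity + C * np.sum(cost * V)

# Asynchronous single-unit descent stands in for the analog dynamics.
V = rng.integers(0, 2, size=(n_sites, n_conc)).astype(float)
for _ in range(3000):
    i, j = rng.integers(n_sites), rng.integers(n_conc)
    trial = V.copy()
    trial[i, j] = 1.0 - trial[i, j]
    if energy(trial) <= energy(V):
        V = trial
print(V.astype(int))
print("sites per concentrator:", V.sum(axis=0).astype(int))
```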
A Neural Network Classifier Based on Coding Theory
Chiueh, Tzi-Dar, Goodman, Rodney
An input vector in the feature space is transformed into an internal representation which is a codeword in the code space, and then error correction decoded in this space to classify the input feature vector to its class. Two classes of codes which give high performance are the Hadamard matrix code and the maximal length sequence code. We show that the number of classes stored in an N-neuron system is linear in N and significantly more than that obtainable by using the Hopfield type memory as a classifier. I. INTRODUCTION Associative recall using neural networks has recently received a great deal of attention. Hopfield in his papers [1,2] describes a mechanism which iterates through a feedback loop and stabilizes at the memory element that is nearest the input, provided that not many memory vectors are stored in the machine. He has also shown that the number of memories that can be stored in an N-neuron system is about 0.15N for N between 30 and 100. McEliece et al. in their work [3] showed that for synchronous operation of the Hopfield memory about N/(2 log N) data vectors can be stored reliably when N is large. Abu-Mostafa [4] has predicted that the upper bound for the number of data vectors in an N-neuron Hopfield machine is N. We believe that one should be able to devise a machine with M, the number of data vectors, linear in N and larger than the 0.15N achieved by the Hopfield method.
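A small sketch of this style of classifier, assuming a Sylvester-construction Hadamard code, random bipolar class prototypes, and a simple linear associator as the feature-to-code transform (the paper's exact transform may differ):

```python
import numpy as np

rng = np.random.default_rng(2)
n_classes, n_features = 8, 64

H = np.array([[1.0]])
for _ in range(3):                    # Sylvester construction of an 8x8 Hadamard
    H = np.block([[H, H], [H, -H]])   # rows are mutually orthogonal +/-1 codewords

protos = rng.choice([-1.0, 1.0], size=(n_classes, n_features))
W = H.T @ protos                      # linear associator: W = sum_i c_i p_i^T

def classify(x):
    """Map the input into code space, then error-correction decode it."""
    internal = np.sign(W @ x)            # internal representation: a noisy codeword
    return int(np.argmax(H @ internal))  # nearest codeword gives the class

x = protos[5].copy()
x[:5] *= -1.0                         # corrupt a few features of prototype 5
print(classify(x))                    # expected output: 5, despite the noise
```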