The Connectivity Analysis of Simple Association

Neural Information Processing Systems

The Connectivity Analysis of Simple Association, or How Many Connections Do You Need? Oregon Graduate Center, Beaverton, OR 97006 ABSTRACT The efficient realization, using current silicon technology, of Very Large Connection Networks (VLCN) with more than a billion connections requires that these networks exhibit a high degree of communication locality. Real neural networks exhibit significant locality, yet most connectionist/neural network models have little. In this paper, the connectivity requirements of a simple associative network are analyzed using communication theory. Several techniques based on communication theory are presented that improve the robustness of the network in the face of sparse, local interconnect structures. Also discussed are some potential problems that arise when information is distributed too widely. INTRODUCTION Connectionist/neural network researchers are learning to program networks that exhibit a broad range of cognitive behavior.
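
To put the locality claim in perspective, here is a back-of-the-envelope calculation (a sketch with illustrative numbers, not figures from the paper): full connectivity grows quadratically with network size, while a bounded local fan-in keeps the connection count linear.

```python
# Illustrative connection counts (numbers are assumptions, not the paper's):
# full connectivity scales as N^2, a locality constraint as N * k.

n_nodes = 1_000_000           # hypothetical network size

full = n_nodes * n_nodes      # every node connects to every other node
local_fan_in = 1_000          # hypothetical local neighborhood size
local = n_nodes * local_fan_in

print(f"full connectivity:  {full:.1e} connections")   # 1.0e+12
print(f"local connectivity: {local:.1e} connections")  # 1.0e+09, the VLCN scale
```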


A Neural Network Classifier Based on Coding Theory

Neural Information Processing Systems

An input vector in the feature space is transformed into an internal representation which is a codeword in the code space, and then error-correction decoded in this space to classify the input feature vector to its class. Two classes of codes which give high performance are the Hadamard matrix code and the maximal length sequence code. We show that the number of classes stored in an N-neuron system is linear in N and significantly more than that obtainable by using the Hopfield type memory as a classifier. I. INTRODUCTION Associative recall using neural networks has recently received a great deal of attention. Hopfield in his papers [1,2] describes a mechanism which iterates through a feedback loop and stabilizes at the memory element that is nearest the input, provided that not many memory vectors are stored in the machine. He has also shown that the number of memories that can be stored in an N-neuron system is about 0.15N for N between 30 and 100. McEliece et al. in their work [3] showed that for synchronous operation of the Hopfield memory about N/(2 log N) data vectors can be stored reliably when N is large. Abu-Mostafa [4] has predicted that the upper bound for the number of data vectors in an N-neuron Hopfield machine is N. We believe that one should be able to devise a machine with M, the number of data vectors, linear in N and larger than the 0.15N achieved by the Hopfield method.
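
As a rough illustration of the decoding step, the following sketch classifies a corrupted Hadamard codeword by maximum-correlation (error-correction) decoding. The feature-to-code transform described in the abstract is omitted, and the code length and error rate are assumptions, not the paper's parameters.

```python
import numpy as np

def sylvester_hadamard(n):
    """Hadamard matrix of order n (a power of 2); its rows are the codewords."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 64                                  # code length = number of neurons (assumed)
H = sylvester_hadamard(N)               # one +/-1 codeword per class: N classes, linear in N

def classify(x):
    """Error-correction decoding: pick the class whose codeword correlates best."""
    return int(np.argmax(H @ x))

rng = np.random.default_rng(0)
true_class = 17
x = H[true_class].astype(float)
flips = rng.choice(N, size=N // 8, replace=False)   # corrupt 12.5% of the bits
x[flips] *= -1
print(classify(x) == true_class)        # True: the correlation margin absorbs the errors
```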


Time-Sequential Self-Organization of Hierarchical Neural Networks

Neural Information Processing Systems

TIME-SEQUENTIAL SELF-ORGANIZATION OF HIERARCHICAL NEURAL NETWORKS Ronald H. Silverman Cornell University Medical College, New York, NY 10021 Andrew S. Noetzel Polytechnic University, Brooklyn, NY 11201 ABSTRACT Self-organization of multi-layered networks can be realized by time-sequential organization of successive neural layers. Lateral inhibition operating in the surround of firing cells in each layer provides for unsupervised capture of excitation patterns presented by the previous layer. By presenting patterns of increasing complexity, in coordination with network self-organization, higher levels of the hierarchy capture concepts implicit in the pattern set. INTRODUCTION A fundamental difficulty in self-organization of hierarchical, multi-layered networks of simple neuron-like cells is the determination of the direction of adjustment of synaptic link weights between neural layers not directly connected to input or output patterns. Several different approaches have been used to address this problem.
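
A minimal sketch of the time-sequential idea, with lateral inhibition idealized as a winner-take-all choice: each layer is trained by unsupervised competitive learning on the responses of the already-frozen layer below. Layer sizes, learning rate, and epoch counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_competitive_layer(inputs, n_units, lr=0.1, epochs=20):
    """Unsupervised winner-take-all layer: lateral inhibition is idealized
    as selecting the single most excited unit, whose weights move toward
    the input pattern (sizes and rates here are illustrative)."""
    W = rng.normal(size=(n_units, inputs.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(epochs):
        for x in inputs:
            winner = np.argmax(W @ x)             # the cell that fires
            W[winner] += lr * (x - W[winner])     # capture the excitation pattern
            W[winner] /= np.linalg.norm(W[winner])
    return W

# Time-sequential organization: train layer 1, freeze it, then train
# layer 2 on layer 1's responses to the pattern set.
patterns = rng.normal(size=(200, 16))
W1 = train_competitive_layer(patterns, n_units=8)
hidden = np.maximum(W1 @ patterns.T, 0).T         # rectified responses of frozen layer 1
W2 = train_competitive_layer(hidden, n_units=4)
```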


High Density Associative Memories

Neural Information Processing Systems

A"'ir Dembo Information Systems Laboratory, Stanford University Stanford, CA 94305 Ofer Zeitouni Laboratory for Information and Decision Systems MIT, Cambridge, MA 02139 ABSTRACT A class of high dens ity assoc iat ive memories is constructed, starting from a description of desired properties those should exhib it. These propert ies include high capac ity, controllable bas ins of attraction and fast speed of convergence. Fortunately enough, the resulting memory is implementable by an artificial Neural Net. I NfRODUCTION Most of the work on assoc iat ive memories has been structure oriented, i.e.. given a Neural architecture, efforts were directed towards the analysis of the resulting network. Issues like capacity, basins of attractions, etc. were the main objects to be analyzed cf., e.g.


An Optimization Network for Matrix Inversion

Neural Information Processing Systems

Box 150, Cheongryang, Seoul, Korea ABSTRACT Inverse matrix calculation can be considered as an optimization problem. We have demonstrated that this problem can be rapidly solved by highly interconnected simple neuron-like analog processors. A network for matrix inversion based on the concept of Hopfield's neural network was designed and implemented with electronic hardware. With slight modifications, the network is readily applicable to efficiently solving systems of linear simultaneous equations. Notable features of this circuit are its potential for high speed due to parallel processing, and its robustness against variations of device parameters.
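
A minimal numerical sketch of the optimization view: treat E(X) = ||A X - I||^2 as the energy and follow its gradient flow, a discrete-time stand-in for the analog network's continuous dynamics. The energy function and step size are assumptions consistent with the abstract, not the paper's circuit equations; replacing I with a vector b makes the same flow solve the linear system A x = b.

```python
import numpy as np

def invert_by_descent(A, lr=0.01, steps=5000):
    """Minimize E(X) = ||A X - I||_F^2 via the gradient flow
    dX/dt = -A^T (A X - I), Euler-integrated with step lr."""
    n = A.shape[0]
    X = np.zeros((n, n))
    I = np.eye(n)
    for _ in range(steps):
        X -= lr * A.T @ (A @ X - I)
    return X

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
X = invert_by_descent(A)
print(np.allclose(A @ X, np.eye(2), atol=1e-6))   # True: X ~= inverse of A
```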


On Tropistic Processing and Its Applications

Neural Information Processing Systems

ON TROPISTIC PROCESSING AND ITS APPLICATIONS Manuel F. Fernandez General Electric Advanced Technology Laboratories Syracuse, New York 13221 ABSTRACT The interaction of a set of tropisms is sufficient in many cases to explain the seemingly complex behavioral responses exhibited by varied classes of biological systems to combinations of stimuli. It can be shown that a straightforward generalization of the tropism phenomenon allows the efficient implementation of effective algorithms which appear to respond "intelligently" to changing environmental conditions. Examples of the utilization of tropistic processing techniques will be presented in this paper in applications entailing simulated behavior synthesis, path-planning, pattern analysis (clustering), and engineering design optimization. INTRODUCTION The goal of this paper is to present an intuitive overview of a general unsupervised procedure for addressing a variety of system control and cost minimization problems. This procedure is based on the idea of utilizing "stimuli" produced by the environment in which the systems are designed to operate as a basis for dynamically providing the necessary system parameter updates.
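
As a rough illustration of tropistic processing applied to path-planning, the sketch below sums an attractive tropism toward a goal with repulsive tropisms away from obstacles and steps along the net stimulus vector. All gains and positions are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def step(pos, goal, obstacles, gain=0.1, repel=0.5):
    """One tropistic update: an attractive pull toward the goal plus
    inverse-square repulsive pushes away from obstacles; the system
    moves a fixed distance along the net stimulus direction."""
    v = goal - pos                                  # attractive tropism
    for obs in obstacles:
        d = pos - obs
        v += repel * d / (np.linalg.norm(d) ** 3)   # repulsive tropism
    return pos + gain * v / np.linalg.norm(v)

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.4])]
for _ in range(200):
    if np.linalg.norm(goal - pos) < 0.1:
        break
    pos = step(pos, goal, obstacles)
print(pos)   # ends near the goal, having skirted the obstacle
```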


Strategies for Teaching Layered Networks Classification Tasks

Neural Information Processing Systems

There is a widespread misconception that the delta rule is in some sense guaranteed to work on networks without hidden units. As previous authors have mentioned, there is no such guarantee for classification tasks. We will begin by presenting explicit counterexamples illustrating two different interesting ways in which the delta rule can fail. We go on to provide conditions which do guarantee that gradient descent will successfully train networks without hidden units to perform two-category classification tasks. We discuss the generalization of our ideas to networks with hidden units and to multi-category classification tasks.
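
For concreteness, a minimal sketch of the delta (LMS) rule training a single unit on a two-category task, with classification read out by thresholding. The rule descends the squared error, not the classification error, which is exactly the gap the counterexamples exploit; the data and learning rate here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linearly separable two-category data (labels +/-1), plus a bias input.
X = np.vstack([rng.normal([2, 2], 0.5, (20, 2)),
               rng.normal([-2, -2], 0.5, (20, 2))])
X = np.hstack([X, np.ones((40, 1))])
y = np.array([1.0] * 20 + [-1.0] * 20)

w = np.zeros(3)
for _ in range(100):
    for xi, yi in zip(X, y):
        w += 0.01 * (yi - w @ xi) * xi   # delta rule: descend the squared error

pred = np.sign(X @ w)                    # classify by thresholding the output
print((pred == y).mean())                # 1.0 here, but squared-error descent
                                         # does not guarantee this in general
```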


Neuromorphic Networks Based on Sparse Optical Orthogonal Codes

Neural Information Processing Systems

Synthetic neural nets [1,2] represent an active and growing research field. Fundamental issues, as well as practical implementations with electronic and optical devices, are being studied. In addition, several learning algorithms have been studied, for example stochastically adaptive systems [3] based on many-body physics optimization concepts [4,5]. Signal processing in the optical domain has also been an active field of research. A wide variety of nonlinear all-optical devices are being studied, directed towards applications both in optical computing and in optical switching.


Analysis and Comparison of Different Learning Algorithms for Pattern Association Problems

Neural Information Processing Systems

ANALYSIS AND COMPARISON OF DIFFERENT LEARNING ALGORITHMS FOR PATTERN ASSOCIATION PROBLEMS J. Bernasconi Brown Boveri Research Center, CH-5405 Baden, Switzerland ABSTRACT We investigate the behavior of different learning algorithms for networks of neuron-like units. As test cases, we use simple pattern association problems, such as the XOR problem and symmetry detection problems. The algorithms considered are either versions of the Boltzmann machine learning rule or based on the backpropagation of errors. We also propose and analyze a generalized delta rule for linear threshold units. We find that the performance of a given learning algorithm depends strongly on the type of units used.
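
As a concrete instance of one of the test cases, here is a minimal backpropagation sketch for the XOR problem; the architecture, learning rate, and iteration count are assumptions, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(4)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])          # XOR targets

sigmoid = lambda z: 1 / (1 + np.exp(-z))
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)       # 4 hidden sigmoid units (assumed)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 0.5
for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                        # forward pass
    o = sigmoid(h @ W2 + b2)
    d_o = (o - T) * o * (1 - o)                     # backpropagated error signals
    d_h = (d_o @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_o; b2 -= lr * d_o.sum(0)     # gradient descent updates
    W1 -= lr * X.T @ d_h; b1 -= lr * d_h.sum(0)

print(np.round(o.ravel(), 2))                       # typically ~ [0, 1, 1, 0]
```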


Generalization of Backpropagation to Recurrent and Higher Order Neural Networks

Neural Information Processing Systems

Fernando J. Pineda Applied Physics Laboratory, Johns Hopkins University, Johns Hopkins Rd., Laurel, MD 20707 ABSTRACT A general method for deriving backpropagation algorithms for networks with recurrent and higher order connections is introduced. The propagation of activation in these networks is determined by dissipative differential equations. The error signal is backpropagated by integrating an associated differential equation. The method is introduced by applying it to the recurrent generalization of the feedforward backpropagation network. The method is extended to the case of higher order networks and to a constrained dynamical system for training a content addressable memory. The essential feature of the adaptive algorithms is that the adaptive equation has a simple outer product form.
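
A minimal sketch of the recurrent scheme as described: relax the forward dynamics to a fixed point, relax the associated (adjoint) error equation to its fixed point, and apply the outer-product update. For simplicity the differential equations are replaced by fixed-point iterations; the network size, gains, and single-output setup are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

g = lambda u: 1 / (1 + np.exp(-u))          # unit activation function

def settle(W, I, n, iters=200):
    """Relax the forward dynamics dx/dt = -x + g(Wx + I) to a fixed point."""
    x = np.zeros(n)
    for _ in range(iters):
        x = g(W @ x + I)
    return x

# Hypothetical 5-unit fully recurrent net; unit 4 is the output.
n, out = 5, 4
W = 0.1 * rng.normal(size=(n, n))
I = rng.normal(size=n)                      # fixed external input
target = 0.8

for _ in range(500):
    x = settle(W, I, n)
    e = np.zeros(n); e[out] = target - x[out]
    z = np.zeros(n)                         # adjoint (error) fixed point:
    for _ in range(200):                    # z = g'(u) * (e + W^T z), g' = x(1-x)
        z = x * (1 - x) * (e + W.T @ z)
    W += 0.2 * np.outer(z, x)               # the simple outer-product update

print(settle(W, I, n)[out])                 # ~ 0.8 after training
```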