Neural Network Implementation Approaches for the Connection Machine

Neural Information Processing Systems

Two approaches are described which allow parallel computation of a model's nonlinear functions, parallel modification of a model's weights, and parallel propagation of a model's activation and error. Each approach also allows a model's interconnect structure to be physically dynamic. A Hopfield model is implemented with each approach at six sizes over the same number of CM processors to provide a performance comparison. INTRODUCTION Simulations of neural network models on digital computers perform various computations by applying linear or nonlinear functions, defined in a program, to weighted sums of integer or real numbers retrieved and stored by array reference. The numerical values are model-dependent parameters like time-averaged spiking frequency (activation), synaptic efficacy (weight), the error in error back-propagation models, and computational temperature in thermodynamic models. The interconnect structure of a particular model is implied by indexing relationships between arrays defined in a program. On the Connection Machine (CM), these relationships are expressed in hardware by processors interconnected through a 16-dimensional hypercube communication network. Mappings are constructed to define higher-dimensional interconnectivity between processors on top of the fundamental geometry of the communication network.
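
Since the abstract is terse about what the parallel steps amount to, a small sketch may help. The NumPy code below is illustrative only, not the paper's CM implementation: vectorized array operations stand in for the per-processor arithmetic, and the Hebbian weight construction, network size, and corruption level are assumptions.

```python
import numpy as np

# Hedged sketch, NOT the paper's CM code: one synchronous Hopfield update,
# written as the data-parallel steps the abstract describes -- parallel
# weighted sums followed by a pointwise nonlinearity. Sizes and the
# Hebbian weight construction are illustrative assumptions.

rng = np.random.default_rng(0)
n = 64                                   # number of units ("processors")

patterns = rng.choice([-1.0, 1.0], size=(3, n))   # stored +/-1 patterns
W = patterns.T @ patterns / n            # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)                 # no self-connections

def hopfield_step(state, W):
    """All units form their weighted input sums in parallel, then apply
    the sign nonlinearity elementwise."""
    return np.sign(W @ state)

# Recall from a corrupted copy of the first stored pattern (~25% flips).
state = patterns[0] * rng.choice([1.0, 1.0, 1.0, -1.0], size=n)
for _ in range(10):
    state = hopfield_step(state, W)
print("overlap with stored pattern:", float(state @ patterns[0]) / n)
```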


A Computer Simulation of Cerebral Neocortex: Computational Capabilities of Nonlinear Neural Networks

Neural Information Processing Systems

A synthetic neural network simulation of cerebral neocortex was developed based on detailed anatomy and physiology. Processing elements possess temporal nonlinearities and connection patterns similar to those of cortical neurons. The network was able to replicate spatial and temporal integration properties found experimentally in neocortex. A certain level of randomness was found to be crucial for the robustness of at least some of the network's computational capabilities. Emphasis was placed on how synthetic simulations can be of use to the study of both artificial and biological neural networks.
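
As a concrete illustration of the temporal integration the abstract mentions, here is a minimal leaky-integrator unit: clustered inputs summate to threshold while an isolated input of the same size does not. This is an editorial sketch, not the paper's model; the time constant, threshold, and input train are all assumed.

```python
import numpy as np

# Illustrative sketch only (not the paper's simulation): a leaky-integrator
# unit with a firing threshold, showing temporal integration. Time constant,
# threshold, and inputs are assumptions.

dt, tau, threshold = 1.0, 10.0, 0.8      # step (ms), membrane tau (ms), firing level
inputs = np.zeros(100)
inputs[[10, 12, 14]] = 0.4               # a clustered volley of inputs...
inputs[60] = 0.4                         # ...versus a single isolated input

v, spikes = 0.0, []
for t, x in enumerate(inputs):
    v += dt / tau * (-v) + x             # leaky integration of synaptic input
    if v > threshold:                    # nonlinear threshold: fire and reset
        spikes.append(t)
        v = 0.0
print("spike times:", spikes)            # only the clustered volley reaches threshold
```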


The Connectivity Analysis of Simple Association

Neural Information Processing Systems

The Connectivity Analysis of Simple Association - or - How Many Connections Do You Need! Oregon Graduate Center, Beaverton, OR 97006. ABSTRACT The efficient realization, using current silicon technology, of Very Large Connection Networks (VLCN) with more than a billion connections requires that these networks exhibit a high degree of communication locality. Real neural networks exhibit significant locality, yet most connectionist/neural network models have little. In this paper, the connectivity requirements of a simple associative network are analyzed using communication theory. Several techniques based on communication theory are presented that improve the robustness of the network in the face of sparse, local interconnect structures. Also discussed are some potential problems when information is distributed too widely. INTRODUCTION Connectionist/neural network researchers are learning to program networks that exhibit a broad range of cognitive behavior.
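
The locality question can be made concrete with a toy experiment. The sketch below is an editorial illustration, not the paper's communication-theoretic analysis: it masks a linear associative memory's weights to a local neighborhood on a ring and measures how one-shot recall degrades as the neighborhood shrinks. Sizes and pattern counts are assumptions.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's analysis): a linear
# associative memory whose weights are masked to a local ring neighborhood,
# probing how recall degrades as interconnect becomes sparse and local.

rng = np.random.default_rng(1)
n, n_pat = 128, 10
P = rng.choice([-1.0, 1.0], size=(n_pat, n))
W_full = P.T @ P / n                     # Hebbian outer-product weights
np.fill_diagonal(W_full, 0.0)

idx = np.arange(n)
ring = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(ring, n - ring)        # circular distance between units

for radius in (64, 16, 4):
    W = np.where(dist <= radius, W_full, 0.0)   # keep only local connections
    recalled = np.sign(W @ P[0])
    print(f"radius {radius:3d}: recall overlap = {float(recalled @ P[0]) / n:+.2f}")
```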


Time-Sequential Self-Organization of Hierarchical Neural Networks

Neural Information Processing Systems

TIME-SEQUENTIAL SELF-ORGANIZATION OF HIERARCHICAL NEURAL NETWORKS Ronald H. Silverman, Cornell University Medical College, New York, NY 10021; Andrew S. Noetzel, Polytechnic University, Brooklyn, NY 11201. ABSTRACT Self-organization of multi-layered networks can be realized by time-sequential organization of successive neural layers. Lateral inhibition operating in the surround of firing cells in each layer provides for unsupervised capture of excitation patterns presented by the previous layer. By presenting patterns of increasing complexity, in coordination with network self-organization, higher levels of the hierarchy capture concepts implicit in the pattern set. INTRODUCTION A fundamental difficulty in self-organization of hierarchical, multi-layered networks of simple neuron-like cells is the determination of the direction of adjustment of synaptic link weights between neural layers not directly connected to input or output patterns. Several different approaches have been used to address this problem. One is to provide teaching inputs to the cells in internal layers of the hierarchy.
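
A minimal sketch of the training scheme described here, under stated assumptions: winner-take-all competition stands in for lateral inhibition, and each layer is frozen before the next is trained. Layer sizes, learning rate, and data are invented for illustration.

```python
import numpy as np

# Hedged sketch of time-sequential self-organization: each layer learns by
# winner-take-all competition (an idealized lateral inhibition) and is
# frozen before the next layer trains. All hyperparameters are assumptions.

rng = np.random.default_rng(2)
X = rng.random((500, 16))                # input patterns (assumed data)
layer_sizes, lr = [8, 4], 0.1

def train_layer(inputs, n_units, epochs=5):
    W = rng.random((n_units, inputs.shape[1]))
    for _ in range(epochs):
        for x in inputs:
            winner = np.argmax(W @ x)    # only the most excited cell fires
            W[winner] += lr * (x - W[winner])   # move the winner toward the input
    return W

acts = X
for n_units in layer_sizes:              # organize successive layers in sequence
    W = train_layer(acts, n_units)
    acts = np.maximum(W @ acts.T, 0).T   # assumed rectified feedforward pass
    print("trained layer of", n_units, "units; output shape", acts.shape)
```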


HIGH DENSITY ASSOCIATIVE MEMORIES

Neural Information Processing Systems

Amir Dembo, Information Systems Laboratory, Stanford University, Stanford, CA 94305; Ofer Zeitouni, Laboratory for Information and Decision Systems, MIT, Cambridge, MA 02139. ABSTRACT A class of high density associative memories is constructed, starting from a description of the desired properties they should exhibit. These properties include high capacity, controllable basins of attraction, and fast speed of convergence. Fortunately enough, the resulting memory is implementable by an artificial neural net. INTRODUCTION Most of the work on associative memories has been structure oriented, i.e., given a neural architecture, efforts were directed towards the analysis of the resulting network. Issues like capacity, basins of attraction, etc. were the main objects to be analyzed; cf., e.g.


An Optimization Network for Matrix Inversion

Neural Information Processing Systems

Box 150, Cheongryang, Seoul, Korea. ABSTRACT Inverse matrix calculation can be considered as an optimization problem. We have demonstrated that this problem can be rapidly solved by highly interconnected simple neuron-like analog processors. A network for matrix inversion based on the concept of Hopfield's neural network was designed and implemented with electronic hardware. With slight modifications, the network is readily applicable to solving linear simultaneous equations efficiently. Notable features of this circuit are its potential speed due to parallel processing and its robustness against variations in device parameters.
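
One standard way to cast inversion as an optimization (consistent with, though not necessarily identical to, the network described here) is to minimize E(X) = ||AX - I||^2 / 2, so that an analog network relaxes along the gradient flow dX/dt = -A^T(AX - I). The sketch below integrates that flow with Euler steps; the step size and test matrix are assumptions.

```python
import numpy as np

# Hedged sketch of matrix inversion as optimization: minimize
# E(X) = ||A X - I||^2 / 2 by gradient descent, a discrete-time stand-in
# for an analog network relaxing to the energy minimum. The energy choice,
# step size, and test matrix are illustrative assumptions.

rng = np.random.default_rng(3)
A = rng.random((4, 4)) + 4 * np.eye(4)   # well-conditioned test matrix
I = np.eye(4)

X = np.zeros((4, 4))
eta = 0.02                               # Euler step (role of the circuit's time constant)
for _ in range(5000):
    X -= eta * A.T @ (A @ X - I)         # follows dX/dt = -A^T (A X - I)

print("max |A X - I| =", float(np.abs(A @ X - I).max()))
```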


On Tropistic Processing and Its Applications

Neural Information Processing Systems

ON TROPISTIC PROCESSING AND ITS APPLICATIONS Manuel F. Fernandez, General Electric Advanced Technology Laboratories, Syracuse, New York 13221. ABSTRACT The interaction of a set of tropisms is sufficient in many cases to explain the seemingly complex behavioral responses exhibited by varied classes of biological systems to combinations of stimuli. It can be shown that a straightforward generalization of the tropism phenomenon allows the efficient implementation of effective algorithms which appear to respond "intelligently" to changing environmental conditions. Examples of the utilization of tropistic processing techniques will be presented in this paper in applications entailing simulated behavior synthesis, path-planning, pattern analysis (clustering), and engineering design optimization. INTRODUCTION The goal of this paper is to present an intuitive overview of a general unsupervised procedure for addressing a variety of system control and cost minimization problems. This procedure is based on the idea of utilizing "stimuli" produced by the environment in which the systems are designed to operate as a basis for dynamically providing the necessary system parameter updates.
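
A toy reading of tropistic processing for path-planning (an editorial sketch; the paper's actual algorithms are not reproduced here): the agent's update is simply the sum of two tropisms, attraction toward a goal and repulsion from an obstacle, each computed from environmental stimuli. Gains, positions, and the repulsion decay law are assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): path-planning from two
# interacting tropisms -- attraction to a goal, repulsion from an obstacle.
# All gains, positions, and the repulsion decay law are assumptions.

goal = np.array([10.0, 10.0])
obstacle = np.array([5.0, 4.0])
pos = np.array([0.0, 0.0])

for step in range(400):
    to_goal = goal - pos
    away = pos - obstacle
    d = np.linalg.norm(away) + 1e-9
    # Constant-strength attractive tropism plus a repulsive tropism
    # whose strength falls off with the square of the distance.
    move = 0.1 * to_goal / np.linalg.norm(to_goal) + 0.5 * away / d**3
    pos += move
    if np.linalg.norm(goal - pos) < 0.2: # close enough to the goal: stop
        break
print(f"final position {pos.round(2)} after {step + 1} steps")
```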


Strategies for Teaching Layered Networks Classification Tasks

Neural Information Processing Systems

There is a widespread misconception that the delta rule is in some sense guaranteed to work on networks without hidden units. As previous authors have mentioned, there is no such guarantee for classification tasks. We will begin by presenting explicit counterexamples illustrating two different interesting ways in which the delta rule can fail. We go on to provide conditions which do guarantee that gradient descent will successfully train networks without hidden units to perform two-category classification tasks. We discuss the generalization of our ideas to networks with hidden units and to multicategory classification tasks.
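
For reference, here is a minimal sketch of the rule in question: gradient descent on squared error for a single linear unit, with the class decided by the sign of the output. The data and learning rate are invented for illustration; the paper's counterexamples are not reproduced here.

```python
import numpy as np

# Minimal sketch of the delta rule at issue: gradient descent on mean
# squared error for one linear unit (no hidden units), two-category targets
# of +/-1, classification by the sign of the output. Data and learning
# rate are illustrative assumptions; this is not the paper's counterexample.

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(+1.5, 1.0, (50, 2)),    # class +1 cluster
               rng.normal(-1.5, 1.0, (50, 2))])   # class -1 cluster
X = np.hstack([X, np.ones((100, 1))])   # bias input
t = np.concatenate([np.ones(50), -np.ones(50)])

w, eta = np.zeros(3), 0.01
for _ in range(500):
    y = X @ w                            # linear output
    w -= eta * X.T @ (y - t) / len(t)    # delta rule: gradient of MSE
print("misclassified:", int(np.sum(np.sign(X @ w) != t)), "of", len(t))
```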