

Temporal Patterns of Activity in Neural Networks

Neural Information Processing Systems

Paolo Gaudiano, Dept. of Aerospace Engineering Sciences, University of Colorado, Boulder CO 80309, USA. January 5, 1988.

Abstract: Patterns of activity over real neural structures are known to exhibit time-dependent behavior. It would seem that the brain may be capable of exploiting the temporal behavior of activity in neural networks to perform functions that cannot otherwise be easily implemented, such as the generation of sequential behavior and the recognition of time-dependent stimuli. A model is presented here which uses neuronal populations with recurrent feedback connections in an attempt to observe and describe the resulting time-dependent behavior. Shortcomings and problems inherent to this model are discussed. Current models by other researchers are reviewed and their similarities and differences discussed.
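The paper's exact equations are not reproduced in the abstract, but the general kind of model it describes can be sketched: a population of leaky firing-rate units whose recurrent (and possibly asymmetric) feedback weights drive time-dependent activity rather than a static equilibrium. A minimal sketch, assuming leaky-integrator dynamics and a logistic rate function (hypothetical choices, not the paper's model):

```python
import numpy as np

def simulate_population(W, x0, steps=200, dt=0.05, tau=1.0):
    """Leaky firing-rate units with recurrent feedback weights W.
    (Hypothetical dynamics chosen for illustration, not the
    paper's exact equations.)"""
    x = np.array(x0, dtype=float)
    history = [x.copy()]
    for _ in range(steps):
        # dx/dt = (-x + f(W x)) / tau, with a logistic rate function f
        x = x + (dt / tau) * (-x + 1.0 / (1.0 + np.exp(-W @ x)))
        history.append(x.copy())
    return np.array(history)

# Asymmetric recurrent weights commonly yield oscillatory,
# time-dependent activity instead of a static equilibrium.
rng = np.random.default_rng(0)
W = 8.0 * rng.standard_normal((4, 4))
activity = simulate_population(W, x0=rng.random(4))
print(activity[-3:])  # late-time activity pattern
```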


Optimal Neural Spike Classification

Neural Information Processing Systems

Using one extracellular microelectrode to record from several neurons is one approach to studying the response properties of sets of adjacent, and therefore likely related, neurons. To do this, however, it is necessary to correctly classify the signals generated by these different neurons. This paper considers the problem of classifying the signals in such an extracellular recording based upon their shapes, and specifically considers the classification of signals in the case when spikes overlap temporally. Introduction: How single neurons in a network of neurons interact when processing information is likely a question central to understanding how real neural networks compute. In the mammalian nervous system we know that spatially adjacent neurons are, in general, more likely to interact, as well as to receive common inputs.
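A standard way to pose this classification problem (not necessarily the paper's exact method) is least-squares template matching, with temporally overlapping spikes handled greedily: subtract the best-fitting template and classify the residual against the remaining units. A minimal sketch under those assumptions:

```python
import numpy as np

def classify_spike(waveform, templates):
    """Least-squares template matching: return the index of the
    template closest to the recorded waveform."""
    errors = [np.sum((waveform - t) ** 2) for t in templates]
    return int(np.argmin(errors))

def resolve_overlap(waveform, templates):
    """Greedy decomposition of a temporally overlapping spike:
    subtract the best-fitting template, then classify the residual
    against the remaining units."""
    first = classify_spike(waveform, templates)
    residual = waveform - templates[first]
    others = [i for i in range(len(templates)) if i != first]
    second = others[classify_spike(residual, [templates[i] for i in others])]
    return first, second

# Two hypothetical spike shapes from adjacent neurons, recorded on
# the same electrode and summed when they fire near-simultaneously.
t = np.linspace(-2, 2, 32)
unit_a = np.exp(-t ** 2)
unit_b = -0.6 * np.roll(unit_a, 8)
print(resolve_overlap(unit_a + unit_b, [unit_a, unit_b]))  # -> (0, 1)
```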


Hierarchical Learning Control - An Approach with Neuron-Like Associative Memories

Neural Information Processing Systems

In this paper, research of the second line is described: starting from a neurophysiologically inspired model of stimulus-response (SR) and/or associative memorization and a psychologically motivated ministructure for basic control tasks, preconditions and conditions are studied for the cooperation of such units in a hierarchical organisation, as can be assumed to be the general layout of macrostructures in the brain. I. INTRODUCTION: Theoretic modelling in brain theory is a highly speculative subject. However, it is necessary, since it seems very unlikely that a clear picture of this very complicated device can be obtained by analyzing only the available measurements on sound and/or damaged brain parts. As in general physics, one has to realize that there are different levels of modelling: in physics stretching from the atomic level over atom assemblies up to general behavioural models like kinematics and mechanics; in brain theory stretching from chemical reactions over electrical spikes and neuronal cell assembly cooperation up to general human behaviour. The research discussed in this paper is located just above the direct study of synaptic cooperation of neuronal cell assemblies as studied e.g. in /Amari 1988/. It takes into account the changes of synaptic weighting, without simulating the physical details of such changes, and makes use of a general imitation of learning situation (stimulus)-response connections for building up trainable basic control loops, which allow dynamic SR memorization and which are themselves elements of some more complex behavioural loops.
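One simple way to picture such an SR unit (an illustrative sketch, not the paper's neuron-like memory) is a table-based associative memory that stores a response per quantized stimulus; a higher-level unit could feed its recalled output to a lower-level unit as the next stimulus, giving the hierarchical cooperation the paper studies:

```python
import numpy as np

class SRMemory:
    """Table-based stimulus-response associative memory: stores one
    response per quantized stimulus and averages repeated training
    presentations (hypothetical unit, for illustration only)."""
    def __init__(self, resolution=0.1):
        self.resolution = resolution
        self.table = {}                     # stimulus key -> (sum, count)

    def _key(self, stimulus):
        q = np.round(np.asarray(stimulus, float) / self.resolution)
        return tuple(q.astype(int))

    def train(self, stimulus, response):
        total, count = self.table.get(self._key(stimulus), (0.0, 0))
        self.table[self._key(stimulus)] = (total + np.asarray(response, float),
                                           count + 1)

    def recall(self, stimulus):
        total, count = self.table.get(self._key(stimulus), (None, 0))
        return None if count == 0 else total / count

unit = SRMemory()
unit.train([0.5, -0.2], [1.0])
print(unit.recall([0.52, -0.21]))   # same quantization cell -> [1.]
```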


Microelectronic Implementations of Connectionist Neural Networks

Neural Information Processing Systems

Three chip designs are described: a hybrid digital/analog programmable connection matrix, an analog connection matrix with adjustable connection strengths, and a digital pipelined best-match chip. The common feature of the designs is the distribution of arithmetic processing power amongst the data storage to minimize data movement.


Learning on a General Network

Neural Information Processing Systems

Amir F. Atiya, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125.

Abstract: This paper generalizes the backpropagation method to a general network containing feedback connections. The network model considered consists of interconnected groups of neurons, where each group could be fully interconnected (it could have feedback connections, with possibly asymmetric weights), but no loops between the groups are allowed. A stochastic descent algorithm is applied, under a certain inequality constraint on each intragroup weight matrix which ensures that the network possesses a unique equilibrium state for every input. Introduction: It has been shown in the last few years that large networks of interconnected "neuron"-like elements ... One of the well-known neural network models is the backpropagation model [1]-[4]. It is an elegant way of teaching a layered feedforward network with a set of given input/output examples.
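The role of the inequality constraint can be illustrated with a standard contraction argument: the logistic function's slope never exceeds 1/4, so keeping the spectral norm of each intragroup weight matrix below 4 makes the settling map x -> sigmoid(Wx + b) a contraction, which has a unique equilibrium reachable from any initial state. A minimal sketch (this norm bound is a textbook sufficient condition, not necessarily the paper's exact constraint):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def constrain_weights(W, margin=0.99):
    """Rescale W so that ||W||_2 * 1/4 < 1 (the logistic's maximal
    slope is 1/4), a sufficient condition for a unique equilibrium.
    Feedback and asymmetric weights remain allowed."""
    norm = np.linalg.norm(W, 2)
    limit = 4.0 * margin
    return W if norm <= limit else W * (limit / norm)

def equilibrium(W, b, tol=1e-10, max_iter=10_000):
    """Settle the group by fixed-point iteration; the contraction
    property guarantees the same state is reached from any start."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = sigmoid(W @ x + b)
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x_new

rng = np.random.default_rng(1)
W = constrain_weights(rng.standard_normal((5, 5)))  # intragroup, asymmetric
print(np.round(equilibrium(W, b=rng.standard_normal(5)), 4))
```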


A Neural-Network Solution to the Concentrator Assignment Problem

Neural Information Processing Systems

This paper presents a neural-net solution to a resource allocation problem that arises in providing local access to the backbone of a wide-area communication network. The problem is described in terms of an energy function that can be mapped onto an analog computational network. Simulation results characterizing the performance of the neural computation are also presented. INTRODUCTION: This paper presents a neural-network solution to a resource allocation problem that arises in providing access to the backbone of a communication network. In the field of operations research, this problem was first known as the warehouse location problem, and heuristics for finding feasible, suboptimal solutions have been developed previously.
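The energy-function formulation can be made concrete: with a binary assignment matrix V (V[i, j] = 1 when site i is served by concentrator j), constraint violations and communication cost combine into a single scalar that an analog network minimizes by gradient descent. The penalty weights and the capacity term below are hypothetical; the paper's exact energy function may differ:

```python
import numpy as np

def assignment_energy(V, cost, capacity, A=500.0, B=500.0):
    """Energy of a candidate site-to-concentrator assignment.
    A and B weight the constraint penalties against the link cost
    (hypothetical coefficients)."""
    one_concentrator = np.sum((V.sum(axis=1) - 1.0) ** 2)   # one concentrator per site
    over_capacity = np.sum(np.maximum(V.sum(axis=0) - capacity, 0.0) ** 2)
    link_cost = np.sum(cost * V)                            # site-to-concentrator cost
    return A * one_concentrator + B * over_capacity + link_cost

# Three sites, two concentrators with capacity two each.
cost = np.array([[1.0, 3.0],
                 [2.0, 1.0],
                 [4.0, 1.0]])
V = np.array([[1, 0],
              [0, 1],
              [0, 1]], dtype=float)
print(assignment_energy(V, cost, capacity=np.array([2.0, 2.0])))  # feasible -> 3.0
```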


High Density Associative Memories

Neural Information Processing Systems

A"'ir Dembo Information Systems Laboratory, Stanford University Stanford, CA 94305 Ofer Zeitouni Laboratory for Information and Decision Systems MIT, Cambridge, MA 02139 ABSTRACT A class of high dens ity assoc iat ive memories is constructed, starting from a description of desired properties those should exhib it. These propert ies include high capac ity, controllable bas ins of attraction and fast speed of convergence. Fortunately enough, the resulting memory is implementable by an artificial Neural Net. I NfRODUCTION Most of the work on assoc iat ive memories has been structure oriented, i.e.. given a Neural architecture, efforts were directed towards the analysis of the resulting network. Issues like capacity, basins of attractions, etc. were the main objects to be analyzed cf., e.g.


Neuromorphic Networks Based on Sparse Optical Orthogonal Codes

Neural Information Processing Systems

Synthetic neural nets [1,2] represent an active and growing research field. Fundamental issues, as well as practical implementations with electronic and optical devices, are being studied. In addition, several learning algorithms have been studied, for example stochastically adaptive systems [3] based on many-body physics optimization concepts [4,5]. Signal processing in the optical domain has also been an active field of research. A wide variety of nonlinear all-optical devices are being studied, directed towards applications both in optical computing and in optical switching.