Goto



A Novel Net that Learns Sequential Decision Process

Neural Information Processing Systems

We propose a new scheme to construct neural networks to classify patterns. The new scheme has several novel features: 1. We focus attention on the important attributes of patterns in ranking order.
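The abstract only names the idea, but the attribute-by-attribute flavor of such a decision process can be sketched. Below is a minimal Python sketch of one way a sequential classifier over ranked attributes could look; the ranking criterion (separation of class means) and the decision margin are assumptions for illustration, not the paper's actual construction.

```python
import numpy as np

def rank_attributes(X, y):
    # Score each attribute by the separation of its class means
    # (an assumed ranking criterion, standing in for the paper's).
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return np.argsort(-np.abs(m1 - m0))

def sequential_classify(x, X, y, order, margin=1.0):
    # Examine attributes in ranked order; stop at the first decisive one.
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    for j in order:
        mid = 0.5 * (m0[j] + m1[j])
        if abs(x[j] - mid) >= margin:              # attribute j is decisive
            return int((x[j] - mid) * (m1[j] - m0[j]) > 0)
    j = order[0]                                   # fall back to the top-ranked attribute
    return int((x[j] - 0.5 * (m0[j] + m1[j])) * (m1[j] - m0[j]) > 0)

# Tiny illustrative dataset: attribute 0 separates the classes, attribute 1 barely does.
X = np.array([[0., 5.], [1., 6.], [9., 5.5], [10., 6.5]])
y = np.array([0, 0, 1, 1])
order = rank_attributes(X, y)
print(order, sequential_classify(np.array([8., 6.]), X, y, order))
```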


Schema for Motor Control Utilizing a Network Model of the Cerebellum

Neural Information Processing Systems

As a means of probing these cerebellar mechanisms, my colleagues and I have been conducting microelectrode studies of the neural messages that flow through the intermediate division of the cerebellum and onward to limb muscles via the rubrospinal tract. We regard this cerebellorubrospinal pathway as a useful model system for studying general problems of sensorimotor integration and adaptive brain function.


Capacity for Patterns and Sequences in Kanerva's SDM as Compared to Other Associative Memory Models

Neural Information Processing Systems

The information capacity of Kanerva's Sparse Distributed Memory (SDM) and Hopfield-type neural networks is investigated. Under the approximations used here, it is shown that the total information stored in these systems is proportional to the number of connections in the network. The proportionality constant is the same for the SDM and Hopfield-type models, independent of the particular model or the order of the model. The approximations are checked numerically. This same analysis can be used to show that the SDM can store sequences of spatiotemporal patterns, and the addition of time-delayed connections allows the retrieval of context-dependent temporal patterns. A minor modification of the SDM can be used to store correlated patterns. Introduction: Many different models of memory and thought have been proposed by scientists over the years.
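The read/write cycle whose capacity is analyzed above can be made concrete in a few lines. This is a minimal sketch of an SDM under the usual formulation (random binary hard-location addresses, activation within a Hamming radius, counter accumulation on write, majority vote on read); the sizes N, M and the radius are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, RADIUS = 256, 1000, 111              # address width, hard locations, activation radius
hard_addrs = rng.integers(0, 2, (M, N))    # fixed random hard-location addresses
counters = np.zeros((M, N), dtype=int)     # one counter vector per location

def activate(addr):
    # Locations whose hard address lies within Hamming distance RADIUS of addr.
    return np.flatnonzero((hard_addrs != addr).sum(axis=1) <= RADIUS)

def write(addr, data):
    # Add +1/-1 to the counters of every activated location.
    counters[activate(addr)] += 2 * data - 1

def read(addr):
    # Majority vote over the activated locations' counters.
    return (counters[activate(addr)].sum(axis=0) > 0).astype(int)

pattern = rng.integers(0, 2, N)
write(pattern, pattern)                    # autoassociative storage
noisy = pattern.copy(); noisy[:20] ^= 1    # corrupt 20 bits of the cue
print((read(noisy) == pattern).mean())     # fraction of bits recovered
```

Writing write(p_t, p_next) for consecutive patterns instead of write(p, p) turns the same machinery into the heteroassociative sequence storage the abstract mentions.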


Analysis of Distributed Representation of Constituent Structure in Connectionist Systems

Neural Information Processing Systems

A general method, the tensor product representation, is described for the distributed representation of value/variable bindings. The method allows the fully distributed representation of symbolic structures: the roles in the structures, as well as the fillers for those roles, can be arbitrarily non-local. Fully and partially localized special cases reduce to existing cases of connectionist representations of structured data; the tensor product representation generalizes these and the few existing examples of fully distributed representations of structures. The representation saturates gracefully as larger structures are represented; it permits recursive construction of complex representations from simpler ones; it respects the independence of the capacities to generate and maintain multiple bindings in parallel; it extends naturally to continuous structures and continuous representational patterns; it permits values to also serve as variables; it enables analysis of the interference of symbolic structures stored in associative memories; and it leads to a characterization of optimal distributed representations of roles and a recirculation algorithm for learning them. Introduction: Any model of complex information processing in networks of simple processors must solve the problem of representing complex structures over network elements. Connectionist models of realistic natural language processing, for example, must employ computationally adequate representations of complex sentences. Many connectionists feel that to develop connectionist systems with the computational power required by complex tasks, distributed representations must be used: an individual processing unit must participate in the representation of multiple items, and each item must be represented as a pattern of activity of multiple processors. Connectionist models have used more or less distributed representations of more or less complex structures, but little if any general analysis of the problem of distributed representation of complex information has been carried out. This paper reports results of an analysis of a general method called the tensor product representation.
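The binding operation at the core of the method is an outer (tensor) product of a filler vector with a role vector, with multiple bindings superimposed by addition. A minimal sketch in the special case of orthonormal role vectors, where unbinding is exact (the specific role and filler vectors are illustrative):

```python
import numpy as np

# Roles as orthonormal vectors, the special case that makes unbinding exact.
roles = np.eye(3)                        # r0, r1, r2 for three structural positions
fillers = {"John": np.array([1., 0., 1., 0.]),
           "loves": np.array([0., 1., 0., 1.]),
           "Mary": np.array([1., 1., 0., 0.])}

# Bind each filler to its role with an outer product; superimpose by addition.
S = sum(np.outer(f, r) for f, r in zip(fillers.values(), roles))

# Unbind: with orthonormal roles, S @ r_i recovers the i-th filler exactly.
for name, r in zip(fillers, roles):
    print(name, np.allclose(S @ r, fillers[name]))
```

For roles that are merely linearly independent, the same unbinding works with the dual basis of the role vectors in place of the roles themselves.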



Phase Transitions in Neural Networks

Neural Information Processing Systems

For related finite array models, classical phase transitions (which describe steady-state behavior) may not.


An Optimization Network for Matrix Inversion

Neural Information Processing Systems

Inverse matrix calculation can be considered as an optimization problem. We have demonstrated that this problem can be rapidly solved by highly interconnected simple neuron-like analog processors. A network for matrix inversion based on the concept of Hopfield's neural network was designed and implemented with electronic hardware. With slight modifications, the network is readily applicable to solving a system of linear simultaneous equations efficiently. Notable features of this circuit are its potential speed due to parallel processing and its robustness against variations in device parameters.
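The "inversion as optimization" view can be sketched directly: minimize E(X) = ||AX - I||^2 by following its negative gradient, which is the kind of dynamics an analog Hopfield-style circuit realizes in continuous time. A discretized sketch, with a step size and iteration count chosen for illustration rather than taken from the paper:

```python
import numpy as np

# Gradient flow dX/dt = -A^T (A X - I) minimizes ||A X - I||^2,
# standing in for the analog network's continuous-time dynamics.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # well-conditioned test matrix
X = np.zeros_like(A)
eta = 0.01                                    # illustrative step size
for _ in range(5000):
    X -= eta * A.T @ (A @ X - np.eye(4))      # discretized gradient descent
print(np.max(np.abs(A @ X - np.eye(4))))      # residual close to zero
```

Replacing the identity matrix with a column vector b makes the same gradient flow solve the linear system Ax = b, mirroring the abstract's remark that slight modifications adapt the network to linear simultaneous equations.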


On the Power of Neural Networks for Solving Hard Problems

Neural Information Processing Systems

The neural network model is a discrete-time system that can be represented by a weighted, undirected graph. A weight is attached to each edge of the graph and a threshold value to each node (neuron) of the graph.
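A minimal sketch of the model as stated: a symmetric weight matrix for the undirected edges, one threshold per node, and a discrete-time update that thresholds the weighted sum of neighboring states. The particular weights, thresholds, update order, and +/-1 state convention are illustrative assumptions.

```python
import numpy as np

W = np.array([[0., 1., -2.],
              [1., 0., 1.],
              [-2., 1., 0.]])            # undirected graph: symmetric weights, zero diagonal
t = np.array([0., 0., 0.])               # one threshold per node
s = np.array([1., -1., 1.])              # initial +/-1 node states

for _ in range(10):                      # discrete-time, node-by-node threshold updates
    for i in range(len(s)):
        s[i] = 1. if W[i] @ s >= t[i] else -1.
print(s)                                 # settles into a stable configuration
```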



Introduction to a System for Implementing Neural Net Connections on SIMD Architectures

Neural Information Processing Systems

Neural networks have attracted much interest recently, and using parallel architectures to simulate neural networks is a natural and necessary application. The SIMD model of parallel computation is chosen because systems of this type can be built with large numbers of processing elements. However, such systems are not naturally suited to generalized communication. A method is proposed that allows an implementation of neural network connections on massively parallel SIMD architectures. The key to this system is an algorithm that allows the formation of arbitrary connections between the "neurons". A feature is the ability to add new connections quickly. It also has error recovery ability and is robust over a variety of network topologies. Simulations of the general connection system, and its implementation on the Connection Machine, indicate that the time and space requirements are proportional to the product of the average number of connections per neuron and the diameter of the interconnection network.
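The complexity claim, time proportional to (average connections per neuron) x (network diameter), can be illustrated with a toy lockstep simulation in which every in-flight message advances one hop per synchronized step. The ring topology and the forward-only routing rule below are illustrative stand-ins, not the paper's actual algorithm.

```python
# Toy lockstep simulation of neuron-to-neuron message delivery on a ring of
# SIMD processors; delivery time grows with hops to the farthest target.
P = 8                                          # processors arranged in a ring
connections = {0: [3, 5], 2: [7], 4: [1]}      # source neuron -> target neurons
activity = {n: float(n + 1) for n in connections}

inbox = [[] for _ in range(P)]
pending = [(src, dst, activity[src])
           for src, dsts in connections.items() for dst in dsts]
steps = 0
while pending:
    steps += 1                                 # one synchronized SIMD step
    still_moving = []
    for pos, dst, val in pending:
        nxt = (pos + 1) % P                    # every message advances one hop clockwise
        if nxt == dst:
            inbox[dst].append(val)             # message delivered
        else:
            still_moving.append((nxt, dst, val))
    pending = still_moving
print(steps, inbox)
```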