A Computer Simulation of Cerebral Neocortex: Computational Capabilities of Nonlinear Neural Networks

Neural Information Processing Systems

A synthetic neural network simulation of cerebral neocortex was developed based on detailed anatomy and physiology. Processing elements possess temporal nonlinearities and connection patterns similar to those of cortical neurons. The network was able to replicate spatial and temporal integration properties found experimentally in neocortex. A certain level of randomness was found to be crucial for the robustness of at least some of the network's computational capabilities. Emphasis was placed on how synthetic simulations can be of use to the study of both artificial and biological neural networks.



A NEURAL NETWORK CLASSIFIER BASED ON CODING THEORY

Neural Information Processing Systems

An input vector in the feature space is transformed into an internal representation which is a codeword in the code space, and then error correction decoded in this space to classify the input feature vector to its class. Two classes of codes which give high performance are the Hadamard matrix code and the maximal length sequence code.
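As a hedged illustration of the decoding step (not code from the paper), each class can be assigned one ±1 row of a Hadamard matrix, and a corrupted internal representation decoded by maximum correlation — which, for these mutually orthogonal codewords, amounts to minimum-distance error correction. The matrix size, class assignment, and single-error corruption below are illustrative assumptions:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of 2
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)            # 8 orthogonal +/-1 codewords of length 8

def decode(r):
    # Error-correction decoding: choose the codeword with the largest
    # correlation (equivalent to minimum Hamming distance for +/-1 codes)
    return int(np.argmax(H @ r))

codeword = H[3].copy()
codeword[5] *= -1          # corrupt one position (a single "bit" error)
print(decode(codeword))    # -> 3: the error is corrected
```

The length-8 Hadamard code has minimum distance 4, so any single error still leaves the correct codeword as the correlation winner.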


How Neural Nets Work

Neural Information Processing Systems

Less work has been performed on using neural networks to process floating point numbers, and it is sometimes stated that neural networks are somehow inherently inaccurate and therefore best suited for "fuzzy" qualitative reasoning. Nevertheless, the potential speed of massively parallel operations makes neural net "number crunching" an interesting topic to explore. In this paper we discuss some of our work in which we demonstrate that for certain applications neural networks can achieve significantly higher numerical accuracy than more conventional techniques. In particular, prediction of future values of a chaotic time series can be performed with exceptionally high accuracy. We analyze how a neural net is able to do this, and in the process show that a large class of functions from R^n to R^m may be accurately approximated by a backpropagation neural net with just two "hidden" layers. The network uses this functional approximation to perform either interpolation (signal processing applications) or extrapolation (symbol processing applications). Neural nets therefore use quite familiar methods to perform these tasks.
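A minimal sketch of the flavor of this result: a two-hidden-layer backpropagation net fitted to one-step prediction of the chaotic logistic map x_{t+1} = 4 x_t (1 - x_t). The layer widths, learning rate, and training length are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Chaotic logistic map: x_{t+1} = 4 x_t (1 - x_t)
xs = np.empty(500)
xs[0] = 0.3
for t in range(499):
    xs[t + 1] = 4.0 * xs[t] * (1.0 - xs[t])
X, Y = xs[:-1, None], xs[1:, None]   # predict the next value from the current one

# Two hidden layers, as in the R^n -> R^m approximation result
sizes = [1, 16, 16, 1]
Ws = [rng.normal(0.0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros((1, b)) for b in sizes[1:]]

def forward(x):
    acts = [x]
    for i, (W, b) in enumerate(zip(Ws, bs)):
        z = acts[-1] @ W + b
        acts.append(np.tanh(z) if i < len(Ws) - 1 else z)  # linear output layer
    return acts

def loss():
    return float(np.mean((forward(X)[-1] - Y) ** 2))

lr = 0.05
init = loss()
for _ in range(2000):                     # plain full-batch gradient descent
    acts = forward(X)
    delta = 2.0 * (acts[-1] - Y) / len(X)
    for i in reversed(range(len(Ws))):
        gW = acts[i].T @ delta
        gb = delta.sum(0, keepdims=True)
        if i > 0:                         # backpropagate through tanh
            delta = (delta @ Ws[i].T) * (1.0 - acts[i] ** 2)
        Ws[i] -= lr * gW
        bs[i] -= lr * gb
print(init, loss())                       # the fit error shrinks substantially
```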


Introduction to a System for Implementing Neural Net Connections on SIMD Architectures

Neural Information Processing Systems

The SIMD model of parallel computation is chosen because systems of this type can be built with large numbers of processing elements. However, such systems are not naturally suited to generalized communication. A method is proposed that allows an implementation of neural network connections on massively parallel SIMD architectures. The key to this system is an algorithm that allows the formation of arbitrary connections between the "neurons". A feature is the ability to add new connections quickly.
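The gather/scatter pattern behind arbitrary connections on a SIMD machine can be sketched with flat parallel arrays, one entry per connection, so that every processing element does the same operation in lockstep (a hedged illustration; the paper's actual algorithm is not reproduced here, and the example weights are made up):

```python
import numpy as np

# One entry per connection: the natural SIMD layout, where each
# processing element handles one connection in lockstep.
src = np.array([0, 0, 1, 2, 2, 3])    # presynaptic neuron per connection
dst = np.array([1, 2, 3, 0, 3, 0])    # postsynaptic neuron per connection
w   = np.array([0.5, -1.0, 2.0, 1.5, 0.25, -0.5])

act = np.array([1.0, 2.0, 0.5, 3.0])  # current activations

# Step 1 (gather): every connection reads its source activation
contrib = w * act[src]
# Step 2 (scatter-add): contributions accumulate at their destinations
net_in = np.zeros_like(act)
np.add.at(net_in, dst, contrib)
print(net_in)                          # net input per neuron
```

Adding a new connection is just appending one element to each of the three flat arrays, which matches the abstract's claim that connections can be added quickly.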



On Properties of Networks of Neuron-Like Elements

Neural Information Processing Systems

In this article we consider two aspects of computation with neural networks. Firstly we consider the problem of the complexity of the network required to compute classes of specified (structured) functions. We give a brief overview of basic known complexity theorems for readers familiar with neural network models but less familiar with circuit complexity theories. We argue that there is considerable computational and physiological justification for the thesis that shallow circuits (i.e., networks with relatively few layers) are computationally more efficient. We hence concentrate on structured (as opposed to random) problems that can be computed in shallow (constant-depth) circuits with relatively few (polynomially many) elements, and demonstrate classes of structured problems that are amenable to such low-cost solutions. We discuss an allied problem, the complexity of learning, and close with some open problems and a discussion of the observed limitations of the theoretical approach. We next turn to a rigorous classification of how much a network of given structure can do, i.e., the computational capacity of a given construct.
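An illustrative construction in this spirit (a standard textbook example, not one taken from the article): n-bit parity, which requires exponentially many gates in constant-depth AND/OR circuits, is computed by a depth-2 circuit of threshold elements using only n + 1 gates:

```python
from itertools import product

def parity_depth2(x):
    # Depth-2 threshold circuit for n-bit parity:
    #   layer 1: n threshold gates T_k = [sum(x) >= k], k = 1..n
    #   layer 2: one gate with alternating weights +1, -1, +1, ...
    # For s input bits set, the alternating sum equals 1 if s is odd, 0 if even.
    n = len(x)
    s = sum(x)
    first = [1 if s >= k else 0 for k in range(1, n + 1)]
    out = sum(((-1) ** k) * first[k] for k in range(n))
    return 1 if out >= 1 else 0   # output threshold gate

for bits in product([0, 1], repeat=4):
    assert parity_depth2(bits) == sum(bits) % 2
print("depth-2 parity circuit verified on all 4-bit inputs")
```

This is the kind of separation the shallow-circuit thesis trades on: changing the element type (threshold versus AND/OR) collapses an exponential cost to a polynomial one at constant depth.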


Presynaptic Neural Information Processing

Neural Information Processing Systems

Current knowledge about the activity dependence of the firing threshold, the conditions required for conduction failure, and the similarity of nodes along a single axon will be reviewed. An electronic circuit model for a site of low conduction safety in an axon will be presented. In response to single-frequency stimulation the electronic circuit acts as a lowpass filter.

I. INTRODUCTION

The axon is often modeled as a wire which imposes a fixed delay on a propagating signal. Using this model, neural information processing is performed by synaptically summing weighted contributions of the outputs from other neurons.
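The lowpass behavior can be illustrated with the magnitude response of a first-order filter (a toy sketch only; the paper's electronic circuit model is more detailed, and the time constant here is an assumed value):

```python
import numpy as np

def lowpass_gain(freq_hz, tau=0.01):
    # Magnitude response of the first-order lowpass dV/dt = (V_in - V) / tau:
    # |H(f)| = 1 / sqrt(1 + (2*pi*f*tau)^2)
    return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * freq_hz * tau) ** 2)

print(lowpass_gain(1.0))      # slow stimulation: gain near 1, signal conducted
print(lowpass_gain(1000.0))   # fast stimulation: heavily attenuated
```

The qualitative point matches the abstract: a site of low conduction safety passes low-frequency activity but attenuates high-frequency activity, rather than acting as a fixed-delay wire.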


A Trellis-Structured Neural Network

Neural Information Processing Systems

We have presented a locally interconnected network which minimizes a function that is analogous to the log likelihood function near the global minimum. The results of simulations demonstrate that the network can successfully decode input sequences containing no noise at least as well as the globally connected Hopfield-Tank [6] decomposition network. Simulations also strongly support the conjecture that in the noiseless case, the network can be guaranteed to converge to the global minimum. In addition, for low error rates, the network can also decode noisy received sequences. We have been able to apply the Cohen-Grossberg proof of the stability of "on-center off-surround" networks to show that each stage will maximize the desired local "likelihood" for noisy received sequences. We have also shown that, in the large-gain limit, the network as a whole is stable and that the equilibrium points correspond to the MLSE decoder output.
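For reference, the conventional MLSE decoder whose output the network's equilibria are shown to match is the Viterbi algorithm. A toy sketch over an assumed 2-state trellis (state = last input bit, branch output = state XOR input bit; this example trellis is an illustration, not the code used in the paper):

```python
def viterbi(received):
    # MLSE decoding by dynamic programming over a 2-state trellis.
    # received: noisy samples of the 0/1 branch outputs.
    INF = float("inf")
    cost = [0.0, INF]                 # path metric per state; start in state 0
    back = []                         # surviving predecessor per state per step
    for r in received:
        new_cost, prev = [INF, INF], [0, 0]
        for s in (0, 1):              # previous state
            for b in (0, 1):          # input bit; next state = b
                c = cost[s] + ((s ^ b) - r) ** 2   # squared-error branch metric
                if c < new_cost[b]:
                    new_cost[b] = c
                    prev[b] = s
        cost = new_cost
        back.append(prev)
    # Traceback from the best final state; the input bit equals the next state
    s = 0 if cost[0] <= cost[1] else 1
    bits = []
    for prev in reversed(back):
        bits.append(s)
        s = prev[s]
    return bits[::-1]

print(viterbi([1, 1, 1]))             # -> [1, 0, 1]: noiseless sequence decoded
```

Input bits [1, 0, 1] from state 0 produce branch outputs [1, 1, 1] under this trellis, and the decoder recovers them exactly; it also tolerates moderate noise on the received samples, which is the regime the abstract's noisy-sequence claims address.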