Neural Information Processing Systems
New Hardware for Massive Neural Networks
Coon, Darryl D., Perera, A. G. Unil
Transient phenomena associated with forward-biased silicon p-n-n structures at 4.2K show remarkable similarities with biological neurons. The devices play a role similar to the two-terminal switching elements in Hodgkin-Huxley equivalent circuit diagrams. The devices provide simpler and more realistic neuron emulation than transistors or op-amps. They have such low power and current requirements that they could be used in massive neural networks. Some observed properties of simple circuits containing the devices include action potentials, refractory periods, threshold behavior, excitation, inhibition, summation over synaptic inputs, synaptic weights, temporal integration, memory, network connectivity modification based on experience, pacemaker activity, firing thresholds, coupling to sensors with graded signal outputs and the dependence of firing rate on input current. Transfer functions for simple artificial neurons with spike-train inputs and spike-train outputs have been measured and correlated with input coupling.
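The firing behaviors listed above (threshold, refractory period, temporal integration, firing rate rising with input current) are generic to spiking elements and can be illustrated with a minimal leaky integrate-and-fire sketch in Python; all parameter values below are illustrative assumptions and are not measurements of the devices described in the abstract.

def simulate_lif(i_input, dt=1e-4, tau=0.02, r=1.0,
                 v_thresh=1.0, v_reset=0.0, t_refrac=2e-3):
    # Leaky integrate-and-fire neuron driven by a constant input current.
    # Returns spike times; all constants are illustrative assumptions.
    v, spikes, refrac_left = 0.0, [], 0.0
    for step in range(int(0.5 / dt)):           # simulate 0.5 s
        t = step * dt
        if refrac_left > 0:                     # refractory period: input ignored
            refrac_left -= dt
            continue
        v += (dt / tau) * (-v + r * i_input)    # leaky temporal integration
        if v >= v_thresh:                       # threshold crossing -> "action potential"
            spikes.append(t)
            v = v_reset
            refrac_left = t_refrac
    return spikes

# Firing rate grows with input current; below threshold the cell never fires.
for i in (0.8, 1.2, 2.0, 4.0):
    print(i, len(simulate_lif(i)) / 0.5, "Hz")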
A Computer Simulation of Cerebral Neocortex: Computational Capabilities of Nonlinear Neural Networks
Singer, Alexander, Donoghue, John P.
A synthetic neural network simulation of cerebral neocortex was developed based on detailed anatomy and physiology. Processing elements possess temporal nonlinearities and connection patterns similar to those of cortical neurons. The network was able to replicate spatial and temporal integration properties found experimentally in neocortex. A certain level of randomness was found to be crucial for the robustness of at least some of the network's computational capabilities. Emphasis was placed on how synthetic simulations can be of use to the study of both artificial and biological neural networks.
A NEURAL NETWORK CLASSIFIER BASED ON CODING THEORY
Chiueh, Tzi-Dar, Goodman, Rodney
An input vector in the feature space is transformed into an internal representation which is a codeword in the code space, and then error correction decoded in this space to classify the input feature vector to its class. Two classes of codes which give high performance are the Hadamard matrix code and the maximal length sequence code.
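As a concrete (and purely illustrative) reading of this scheme, the sketch below assigns each class a row of a Hadamard matrix as its codeword, maps a feature vector into code space with an assumed random linear projection followed by a sign nonlinearity, and classifies by decoding to the nearest codeword; the projection is a stand-in and is not the transformation defined in the paper.

import numpy as np

def hadamard(n):
    # Sylvester construction of an n x n +/-1 Hadamard matrix (n a power of 2).
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

rng = np.random.default_rng(0)
n_classes, code_len, n_features = 4, 8, 16
codewords = hadamard(code_len)[:n_classes]       # one codeword per class

W = rng.standard_normal((code_len, n_features))  # assumed feature-to-code-space map

def classify(x):
    y = np.sign(W @ x)                           # internal representation in code space
    # error-correction decoding: pick the codeword with the highest correlation
    return int(np.argmax(codewords @ y))

print(classify(rng.standard_normal(n_features)))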
How Neural Nets Work
Lapedes, Alan S., Farber, Robert M.
Less work has been performed on using neural networks to process floating point numbers, and it is sometimes stated that neural networks are somehow inherently inaccurate and therefore best suited for "fuzzy" qualitative reasoning. Nevertheless, the potential speed of massively parallel operations makes neural net "number crunching" an interesting topic to explore. In this paper we discuss some of our work in which we demonstrate that for certain applications neural networks can achieve significantly higher numerical accuracy than more conventional techniques. In particular, prediction of future values of a chaotic time series can be performed with exceptionally high accuracy. We analyze how a neural net is able to do this, and in the process show that a large class of functions from R^n to R^m may be accurately approximated by a backpropagation neural net with just two "hidden" layers. The network uses this functional approximation to perform either interpolation (signal processing applications) or extrapolation (symbol processing applications). Neural nets therefore use quite familiar methods to perform their tasks.
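A minimal sketch of the idea, under stated assumptions: a small two-hidden-layer backpropagation network trained by plain gradient descent to predict the next value of the logistic map (a convenient chaotic series; the paper's series, network sizes, and training details are not reproduced here).

import numpy as np

rng = np.random.default_rng(0)

# Chaotic series: logistic map x_{t+1} = 4 x_t (1 - x_t)
x = np.empty(600); x[0] = 0.3
for t in range(599):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
X, Y = x[:-1, None], x[1:, None]                 # predict x_{t+1} from x_t

def init(n_in, n_out):
    return rng.standard_normal((n_in, n_out)) * 0.5, np.zeros(n_out)

W1, b1 = init(1, 16); W2, b2 = init(16, 16); W3, b3 = init(16, 1)
lr = 0.05
for epoch in range(3000):
    h1 = np.tanh(X @ W1 + b1)                    # first hidden layer
    h2 = np.tanh(h1 @ W2 + b2)                   # second hidden layer
    pred = h2 @ W3 + b3                          # linear output
    err = pred - Y
    # backpropagation of the squared error
    g3 = h2.T @ err / len(X)
    d2 = (err @ W3.T) * (1 - h2 ** 2)
    g2 = h1.T @ d2 / len(X)
    d1 = (d2 @ W2.T) * (1 - h1 ** 2)
    g1 = X.T @ d1 / len(X)
    W3 -= lr * g3; b3 -= lr * err.mean(0)
    W2 -= lr * g2; b2 -= lr * d2.mean(0)
    W1 -= lr * g1; b1 -= lr * d1.mean(0)

print("mean squared one-step prediction error:", float((err ** 2).mean()))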
Introduction to a System for Implementing Neural Net Connections on SIMD Architectures
The SIMD model of parallel computation is chosen, because systems of this type can be built with large numbers of processing elements. However, such systems are not naturally suited to generalized communication. A method is proposed that allows an implementation of neural network connections on massively parallel SIMD architectures. The key to this system is an algorithm that allows the formation of arbitrary connections between the "neurons". A feature is the ability to add new connections quickly.
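The abstract does not spell out the connection algorithm, so the following is only an illustration of the general idea of data-parallel connection evaluation, not the proposed method: connections stored as parallel (source, target, weight) arrays and evaluated with vectorized gather and scatter-add operations, with a new connection added by appending one entry to each array.

import numpy as np

n_neurons = 6
src    = np.array([0, 0, 1, 2, 4])          # presynaptic neuron per connection
dst    = np.array([3, 4, 3, 5, 5])          # postsynaptic neuron per connection
weight = np.array([0.5, -1.0, 0.25, 1.0, 2.0])

activity = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])

contrib = weight * activity[src]            # gather: one multiply per connection, in lockstep
net_input = np.zeros(n_neurons)
np.add.at(net_input, dst, contrib)          # scatter-add into the target neurons

# Adding a new connection is just appending one entry to each array.
src, dst, weight = np.append(src, 1), np.append(dst, 5), np.append(weight, 0.7)

print(net_input)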
On Properties of Networks of Neuron-Like Elements
Baldi, Pierre, Venkatesh, Santosh S.
In this article we consider two aspects of computation with neural networks. Firstly we consider the problem of the complexity of the network required to compute classes of specified (structured) functions. We give a brief overview of basic known complexity theorems for readers familiar with neural network models but less familiar with circuit complexity theories. We argue that there is considerable computational and physiological justification for the thesis that shallow circuits (i.e., networks with relatively few layers) are computationally more efficient. We hence concentrate on structured (as opposed to random) problems that can be computed in shallow (constant depth) circuits with relatively few (polynomially many) elements, and demonstrate classes of structured problems that are amenable to such low cost solutions. We discuss an allied problem, the complexity of learning, and close with some open problems and a discussion of the observed limitations of the theoretical approach. We next turn to a rigorous classification of how much a network of given structure can do; i.e., the computational capacity of a given construct.
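As one concrete instance of the kind of shallow, small circuit in question (a textbook construction used here only as an illustration, not an example taken from the article): the parity of n bits, which has no polynomial-size constant-depth circuit over unbounded fan-in AND/OR/NOT gates, is computed by a depth-two circuit of n + 1 linear threshold elements.

def threshold(weights, inputs, theta):
    # A single linear threshold element.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def parity_depth2(bits):
    n = len(bits)
    # Layer 1: gate i fires iff at least i of the inputs are 1.
    layer1 = [threshold([1] * n, bits, i) for i in range(1, n + 1)]
    # Layer 2: alternating +1/-1 weights; the weighted sum is 1 exactly when
    # the number of ones is odd.
    out_weights = [(-1) ** i for i in range(n)]
    return threshold(out_weights, layer1, 1)

for bits in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1, 1)]:
    assert parity_depth2(list(bits)) == sum(bits) % 2
print("depth-2 threshold circuit computes parity")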
Presynaptic Neural Information Processing
Current knowledge about the activity dependence of the firing threshold, the conditions required for conduction failure, and the similarity of nodes along a single axon will be reviewed. An electronic circuit model for a site of low conduction safety in an axon will be presented. In response to single-frequency stimulation the electronic circuit acts as a low-pass filter. The axon is often modeled as a wire which imposes a fixed delay on a propagating signal. Using this model, neural information processing is performed by synaptically summing weighted contributions of the outputs from other neurons.
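As a generic illustration of the low-pass behavior mentioned above (the time constant is an assumption, not a parameter of the circuit model in the paper), a first-order low-pass response attenuates single-frequency inputs progressively as frequency increases:

import numpy as np

def lowpass_gain(freq_hz, tau=0.01):
    # Amplitude gain of a first-order low-pass filter: |H(f)| = 1 / sqrt(1 + (2*pi*f*tau)^2).
    return 1.0 / np.sqrt(1.0 + (2 * np.pi * freq_hz * tau) ** 2)

for f in (1, 10, 100, 1000):
    print(f, "Hz ->", round(float(lowpass_gain(f)), 3))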