The Hopfield Model with Multi-Level Neurons

Neural Information Processing Systems

The generalization replaces two-state neurons by neurons taking a richer set of values. Two classes of neuron input-output relations are developed that guarantee convergence to stable states. The first is a class of "continuous" relations, and the second is a class of allowed quantization rules for the neurons.
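A minimal sketch, assuming a particular level set and a nearest-level quantization rule (illustrative, not the paper's exact conditions), of how a Hopfield-style network with multi-level neurons can be updated asynchronously:

```python
import numpy as np

# Hopfield-style network whose neurons take multi-level values rather
# than {-1, +1}. The level set and quantizer are assumptions.
LEVELS = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

def quantize(u):
    """Map a value u to the nearest allowed neuron level."""
    return LEVELS[np.argmin(np.abs(LEVELS - u))]

def async_update(W, v, steps=500, seed=0):
    """Asynchronous updates; symmetric W with zero diagonal is the
    usual sufficient condition for convergence to a stable state."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(len(v))
        v[i] = quantize(np.tanh(W[i] @ v))   # squash the field, then quantize
    return v

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
W = (A + A.T) / 2                            # symmetric weights
np.fill_diagonal(W, 0.0)                     # zero self-coupling
v = rng.choice(LEVELS, size=6)
print(async_update(W, v))
```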


Analysis and Comparison of Different Learning Algorithms for Pattern Association Problems

Neural Information Processing Systems

J. Bernasconi, Brown Boveri Research Center, CH-5405 Baden, Switzerland

We investigate the behavior of different learning algorithms for networks of neuron-like units. As test cases we use simple pattern association problems, such as the XOR problem and symmetry detection problems. The algorithms considered are either versions of the Boltzmann machine learning rule or based on the backpropagation of errors. We also propose and analyze a generalized delta rule for linear threshold units. We find that the performance of a given learning algorithm depends strongly on the type of units used.
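For concreteness, a minimal sketch of the XOR test problem trained with ordinary backpropagation and sigmoid units; this is a generic illustration, not the paper's generalized delta rule for linear threshold units:

```python
import numpy as np

# Train a small 2-3-1 sigmoid network on XOR with plain gradient descent.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.standard_normal((2, 3)); b1 = np.zeros(3)
W2 = rng.standard_normal((3, 1)); b2 = np.zeros(1)
sig = lambda u: 1.0 / (1.0 + np.exp(-u))

for _ in range(20000):
    h = sig(X @ W1 + b1)                 # hidden layer
    y = sig(h @ W2 + b2)                 # output layer
    d2 = (y - t) * y * (1 - y)           # output delta
    d1 = (d2 @ W2.T) * h * (1 - h)       # hidden delta
    W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

print(np.round(y.ravel(), 2))            # should approach [0, 1, 1, 0]
```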


Temporal Patterns of Activity in Neural Networks

Neural Information Processing Systems

Patterns of activity over real neural structures are known to exhibit time-dependent behavior. It would seem that the brain may be capable of utilizing temporal behavior of activity in neural networks as a way of performing functions which cannot otherwise be easily implemented. These might include the origination of sequential behavior and the recognition of time-dependent stimuli. A model is presented here which uses neuronal populations with recurrent feedback connections in an attempt to observe and describe the resulting time-dependent behavior. Shortcomings and problems inherent to this model are discussed. Current models by other researchers are reviewed, and their similarities and differences are discussed.
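A minimal sketch of how recurrent excitatory-inhibitory feedback between two neuronal populations can produce sustained time-dependent activity; this uses Wilson-Cowan-style dynamics with standard textbook parameters, assumed here for illustration rather than taken from the paper:

```python
import numpy as np

# Two recurrently coupled populations: excitatory E and inhibitory I.
# With these classic parameters the system settles into a limit cycle,
# i.e. sustained oscillatory (time-dependent) activity.
def S(x, a, theta):
    """Sigmoid response function, shifted so that S(0) = 0."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

E, I, dt = 0.1, 0.05, 0.01
trace = []
for _ in range(5000):
    dE = -E + (1 - E) * S(16 * E - 12 * I + 1.25, 1.3, 4.0)
    dI = -I + (1 - I) * S(15 * E - 3 * I, 2.0, 3.7)
    E, I = E + dt * dE, I + dt * dI
    trace.append(E)
print(min(trace[2500:]), max(trace[2500:]))  # E keeps oscillating
```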


Minkowski-r Back-Propagation: Learning in Connectionist Models with Non-Euclidian Error Signals

Neural Information Processing Systems

It can be shown that neural-like networks containing a single hidden layer of nonlinear activation units can learn to do a piece-wise linear partitioning of a feature space [2]. One result of such a partitioning is a complex gradient surface on which decisions about new input stimuli will be made. The generalization, categorization and clustering properties of the network are therefore determined by this mapping of input stimuli to this gradient surface in the output space. This gradient surface is a function of the conditional probability distributions of the output vectors given the input feature vectors as well as a function of the error relating the teacher signal and output.
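The Minkowski-r generalization replaces the usual squared-error measure with an r-th power of the absolute error. A short sketch of that loss and its gradient with respect to the output (r = 2 recovers the standard Euclidean back-propagation error signal):

```python
import numpy as np

# Minkowski-r error E = (1/r) * sum_i |t_i - o_i|**r and its gradient.
def minkowski_r_loss(t, o, r):
    return np.sum(np.abs(t - o) ** r) / r

def minkowski_r_grad(t, o, r):
    # dE/do_i = -|t_i - o_i|**(r - 1) * sign(t_i - o_i)
    return -np.abs(t - o) ** (r - 1) * np.sign(t - o)

t = np.array([1.0, 0.0, 1.0])   # teacher signal
o = np.array([0.8, 0.3, 0.4])   # network output
for r in (1.5, 2.0, 4.0):
    print(r, minkowski_r_loss(t, o, r), minkowski_r_grad(t, o, r))
```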


Cycles: A Simulation Tool for Studying Cyclic Neural Networks

Neural Information Processing Systems

Michael T. Gately, Texas Instruments Incorporated, Dallas, TX 75265

A computer program has been designed and implemented to allow a researcher to analyze the oscillatory behavior of simulated neural networks with cyclic connectivity. The computer program, implemented on the Texas Instruments Explorer/Odyssey system, and the results of numerous experiments are discussed. The program, CYCLES, allows a user to construct, operate, and inspect neural networks containing cyclic connection paths with the aid of a powerful graphics-based interface. Numerous cycles have been studied, including cycles with one or more activation points, non-interruptible cycles, cycles with variable path lengths, and interacting cycles. The final class, interacting cycles, is important due to its ability to implement time-dependent goal processing in neural networks.
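As a toy illustration of the simplest case described above, a single activation point circulating around a cyclic connection path (this sketch is not the CYCLES program itself):

```python
import numpy as np

# A ring of threshold units in which one activation point circulates,
# the simplest cyclic network of the kind the tool is said to study.
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[(i + 1) % n, i] = 1.0          # unit i excites unit i+1 on the ring

state = np.zeros(n)
state[0] = 1.0                       # one activation point
for step in range(2 * n):
    state = (W @ state >= 0.5).astype(float)   # synchronous threshold update
    print(step, state.astype(int))   # the point travels around the cycle
```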


Correlational Strength and Computational Algebra of Synaptic Connections Between Neurons

Neural Information Processing Systems

Intracellular recordings in spinal cord motoneurons and cerebral cortex neurons have provided new evidence on the correlational strength of monosynaptic connections, and the relation between the shapes of postsynaptic potentials and the associated increased firing probability. In these cells, excitatory postsynaptic potentials (EPSPs) produce cross-correlogram peaks which resemble in large part the derivative of the EPSP. Additional synaptic noise broadens the peak, but the peak area -- i.e., the number of above-chance firings triggered per EPSP -- remains proportional to the EPSP amplitude. The consequences of these data for information processing by polysynaptic connections are discussed. The effects of sequential polysynaptic links can be calculated by convolving the effects of the underlying monosynaptic connections.
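A short illustrative sketch of the convolution rule with assumed kernel shapes: the peak areas of the monosynaptic kernels multiply along the chain, matching the statement that above-chance firings per EPSP compose across sequential links:

```python
import numpy as np

# Toy correlogram kernels shaped like the rectified derivative of an
# EPSP (shapes and parameters are assumptions, not the recorded data).
dt = 0.1                                   # ms per bin
tau = np.arange(0, 20, dt)                 # time axis, ms

def correlogram_peak(amp, rise, decay):
    """Peak resembling the (rectified) derivative of an EPSP."""
    epsp = amp * (np.exp(-tau / decay) - np.exp(-tau / rise))
    return np.clip(np.gradient(epsp, dt), 0.0, None)

k1 = correlogram_peak(1.0, 0.5, 4.0)       # first synaptic link
k2 = correlogram_peak(0.6, 0.8, 6.0)       # second synaptic link
poly = np.convolve(k1, k2) * dt            # two-link (disynaptic) effect

# Peak areas (above-chance firings per spike) multiply along the chain.
print(k1.sum() * dt * k2.sum() * dt, poly.sum() * dt)
```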


Distributed Neural Information Processing in the Vestibulo-Ocular System

Neural Information Processing Systems

Clifford Lau, Office of Naval Research Detachment, Pasadena, CA 91106; Vicente Honrubia, UCLA Division of Head and Neck Surgery, Los Angeles, CA 90024

A new distributed neural information-processing model is proposed to explain the response characteristics of the vestibulo-ocular system and to reflect more accurately the latest anatomical and neurophysiological data on the vestibular afferent fibers and vestibular nuclei. In this model, head motion is sensed topographically by hair cells in the semicircular canals. Hair cell signals are then processed by multiple synapses in the primary afferent neurons, which exhibit a continuum of varying dynamics. The model is an application of the concept of "multilayered" neural networks to the description of findings in the bullfrog vestibular nerve, and allows us to formulate mathematically the behavior of an assembly of neurons whose physiological characteristics vary according to their anatomical properties. Traditionally, the physiological properties of individual vestibular afferent neurons have been modeled as a linear time-invariant system based on Steinhausen's description of cupular motion.
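A hedged sketch of the distributed-processing idea: a population of afferent units with a continuum of first-order dynamics filters the same head-velocity signal, and the ensemble output is their weighted sum. The filter form, time constants, and weights are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Many afferent units, each a first-order low-pass filter with its own
# time constant, pooled linearly into a population response.
dt = 0.001
t = np.arange(0.0, 2.0, dt)
head_velocity = np.sin(2 * np.pi * 1.0 * t)      # 1 Hz head rotation

taus = np.linspace(0.005, 0.1, 20)               # continuum of time constants
weights = np.full(len(taus), 1.0 / len(taus))    # uniform pooling

outputs = np.zeros((len(taus), len(t)))
for j, tau in enumerate(taus):
    for k in range(1, len(t)):                   # first-order dynamics
        outputs[j, k] = outputs[j, k-1] + dt / tau * (head_velocity[k] - outputs[j, k-1])

ensemble = weights @ outputs                     # population response
print(ensemble[-5:])
```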


Presynaptic Neural Information Processing

Neural Information Processing Systems

The potential for presynaptic information processing within the arbor of a single axon will be discussed in this paper. Current knowledge about the activity dependence of the firing threshold, the conditions required for conduction failure, and the similarity of nodes along a single axon will be reviewed. An electronic circuit model for a site of low conduction safety in an axon will be presented. In response to single-frequency stimulation, the electronic circuit acts as a low-pass filter. The axon is often modeled as a wire which imposes a fixed delay on a propagating signal.
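A first-order RC low-pass filter (with assumed component values) reproduces the qualitative behavior described for the circuit model under single-frequency stimulation:

```python
import numpy as np

# Magnitude response of a first-order RC low-pass filter. The paper's
# circuit model is more detailed; component values here are assumptions.
def rc_lowpass_gain(f_hz, r_ohm=1e6, c_farad=1e-9):
    """|H(f)| for a first-order RC low-pass filter."""
    fc = 1.0 / (2.0 * np.pi * r_ohm * c_farad)   # cutoff, ~159 Hz here
    return 1.0 / np.sqrt(1.0 + (f_hz / fc) ** 2)

for f in (10, 100, 1000, 10000):                 # stimulation frequency, Hz
    print(f, round(rc_lowpass_gain(f), 3))       # gain falls off above cutoff
```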


Experimental Demonstrations of Optical Neural Computers

Neural Information Processing Systems

The high interconnectivity required by neural computers can be simply implemented in optics because channels for optical signals may be superimposed in three dimensions with little or no cross coupling. Since these channels may be formed holographically, optical neural systems can be designed to create and maintain interconnections very simply. Thus the optical system designer can to a large extent avoid the analytical and topological problems of determining individual interconnections for a given neural system and constructing physical paths for these interconnections. An archetypal design for a single layer of an optical neural computer is shown in Figure 1. Nonlinear thresholding elements, neurons, are arranged on two-dimensional planes which are interconnected via the third dimension by holographic elements. The key concerns in implementing this design involve the need for suitable nonlinearities for the neural planes and high-capacity, easily modifiable holographic elements. While it is possible to implement the neural function using entirely optical nonlinearities, for example using etalon arrays, optoelectronic two-dimensional spatial light modulators (2D SLMs) suitable for this purpose are more readily available.
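Schematically, the hologram plays the role of a four-index weight tensor connecting an input neural plane to an output plane, followed by pointwise thresholding. A minimal sketch with assumed shapes and random weights, not a model of any particular optical system:

```python
import numpy as np

# Neurons on a 2-D input plane connect to a 2-D output plane through a
# four-index weight tensor (the role played optically by the hologram),
# followed by pointwise thresholding on the output plane.
rng = np.random.default_rng(0)
Nx = Ny = 8
plane_in = rng.random((Nx, Ny))                  # input neural plane
H = 0.1 * rng.standard_normal((Nx, Ny, Nx, Ny))  # holographic weights

activity = np.einsum('klij,ij->kl', H, plane_in) # fan-in via the 3rd dimension
plane_out = (activity > 0.0).astype(float)       # thresholding neurons
print(plane_out)
```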


Bit-Serial Neural Networks

Neural Information Processing Systems

This arises from the parallelism and distributed knowledge representation which give rise to gentle degradation as faults appear. These features make neural networks attractive for implementation in VLSI and WSI. For example, the natural fault tolerance could be useful in silicon wafers with imperfect yield, where the network degradation is approximately proportional to the non-functioning silicon area. To cast neural networks in engineering language, a neuron is a state machine that is either "on" or "off", and which in general assumes intermediate states as it switches smoothly between these extrema. The synapses weight the signals from a transmitting neuron such that it is more or less excitatory or inhibitory to the receiving neuron. The set of synaptic weights determines the stable states and represents the learned information in a system. The neural state, V_i, is related to the total neural activity stimulated by inputs to the neuron through an activation function, F. Neural activity is the level of excitation of the neuron, and the activation function describes how the neural state responds to a change in activity.
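A minimal sketch of the state equation described above (notation assumed): each neuron's activity is the weighted sum of transmitting states, and its state V_i is that activity passed through a smooth activation function F:

```python
import numpy as np

# activity_i = sum_j W[i, j] * V[j]; the new state V_i = F(activity_i),
# where F switches smoothly between the "off" (0) and "on" (1) extrema.
def F(activity, gain=4.0):
    """Smooth activation function saturating at the two extrema."""
    return 1.0 / (1.0 + np.exp(-gain * activity))

W = np.array([[ 0.0, 0.8, -0.4],
              [ 0.8, 0.0,  0.6],
              [-0.4, 0.6,  0.0]])   # synaptic weights (+/- = excite/inhibit)
V = np.array([0.2, 0.9, 0.1])       # current neural states

activity = W @ V                    # total activity at each neuron
print(F(activity))                  # updated states
```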