A Dynamical Approach to Temporal Pattern Processing
Stornetta, W. Scott, Hogg, Tad, Huberman, Bernardo A.
Recognizing patterns with temporal context is important for such tasks as speech recognition, motion detection, and signature verification. We propose an architecture in which time serves as its own representation, and temporal context is encoded in the state of the nodes. We contrast this with the approach of replicating portions of the architecture to represent time. As one example of these ideas, we demonstrate an architecture with capacitive inputs serving as temporal feature detectors in an otherwise standard back propagation model. Experiments involving motion detection and word discrimination serve to illustrate novel features of the system.
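A minimal sketch (ours, not the authors' code) of the capacitive-input idea: each input node holds an exponentially decaying trace of its raw signal, so the node state itself encodes recent temporal context for an otherwise standard feed-forward net. The decay constant k and the toy pulse trains are illustrative assumptions.

```python
import numpy as np

def capacitive_trace(signal, k=0.8):
    """Leaky-integrator (RC-style) trace of a 1-D input signal.

    x[t] = k * x[t-1] + (1 - k) * signal[t]
    The decaying history gives each input node temporal context.
    """
    x = np.zeros_like(signal, dtype=float)
    for t in range(len(signal)):
        prev = x[t - 1] if t > 0 else 0.0
        x[t] = k * prev + (1.0 - k) * signal[t]
    return x

# Two pulse trains with the same total input but different timing
# become distinguishable once passed through the capacitive trace.
fast = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=float)
slow = np.array([1, 0, 0, 1, 0, 0, 1, 0], dtype=float)
print(capacitive_trace(fast)[-1], capacitive_trace(slow)[-1])
```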
Presynaptic Neural Information Processing
The potential for presynaptic information processing within the arbor of a single axon is discussed in this paper. Current knowledge about the activity dependence of the firing threshold, the conditions required for conduction failure, and the similarity of nodes along a single axon is reviewed. An electronic circuit model for a site of low conduction safety in an axon is presented. In response to single-frequency stimulation, the electronic circuit acts as a lowpass filter.
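A toy sketch of why a low-conduction-safety site can act as a lowpass filter, under our illustrative assumption that each transmitted spike leaves the site briefly refractory: closely spaced spikes fail to conduct, so high-frequency trains are attenuated while low-frequency trains pass.

```python
import numpy as np

def low_safety_node(spike_times, recovery=5.0):
    """Toy model of a site of low conduction safety in an axon.

    Each transmitted spike leaves the site refractory for `recovery` ms;
    spikes arriving sooner fail to conduct, so high-frequency trains
    are attenuated (lowpass behavior on firing rate).
    """
    transmitted = []
    last = -np.inf
    for t in spike_times:
        if t - last >= recovery:
            transmitted.append(t)
            last = t
    return transmitted

for period in (2.0, 4.0, 8.0):          # input inter-spike intervals (ms)
    spikes = np.arange(0.0, 40.0, period)
    out = low_safety_node(spikes)
    print(f"period {period} ms: {len(spikes)} in -> {len(out)} conducted")
```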
Neural Net and Traditional Classifiers
Huang, William Y., Lippmann, Richard P.
Previous work on nets with continuous-valued inputs led to generative procedures to construct convex decision regions with two-layer perceptrons (one hidden layer) and arbitrary decision regions with three-layer perceptrons (two hidden layers). Here we demonstrate that two-layer perceptron classifiers trained with back propagation can form both convex and disjoint decision regions. Such classifiers are robust, train rapidly, and provide good performance with simple decision regions. When complex decision regions are required, however, convergence time can be excessively long and performance is often no better than that of k-nearest neighbor classifiers. Three neural net classifiers are presented that provide more rapid training in such situations. Two use fixed weights in the first one or two layers and are similar to classifiers that estimate probability density functions using histograms. A third "feature map classifier" uses both unsupervised and supervised training. It provides good performance with little supervised training in situations such as speech recognition where much unlabeled training data is available. The architecture of this classifier can be used to implement a neural net k-nearest neighbor classifier.
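A minimal sketch of a two-layer perceptron (one hidden layer) trained with back propagation on XOR, whose positive class occupies two disjoint regions; the learning rate, hidden-layer size, and iteration count are arbitrary choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the positive class occupies two disjoint regions, which a
# one-hidden-layer perceptron trained with back propagation can separate.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)          # hidden layer
    p = sigmoid(h @ W2 + b2)          # output
    d2 = (p - y) * p * (1 - p)        # output delta
    d1 = (d2 @ W2.T) * h * (1 - h)    # hidden delta (backpropagated)
    W2 -= 0.5 * h.T @ d2; b2 -= 0.5 * d2.sum(0)
    W1 -= 0.5 * X.T @ d1; b1 -= 0.5 * d1.sum(0)

print(np.round(p.ravel(), 2))  # approximately [0, 1, 1, 0]
```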
The Hopfield Model with Multi-Level Neurons
The generalization replaces two-state neurons with neurons that take values in a richer set. Two classes of neuron input-output relations are developed that guarantee convergence to stable states: the first is a class of "continuous" relations, and the second is a class of allowed quantization rules for the neurons.
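A minimal sketch of one admissible quantization rule: with symmetric, zero-diagonal weights, each neuron asynchronously jumps to the allowed level that minimizes the standard quadratic energy, so the energy never increases and the state settles. The network size, weights, and level set are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.array([-1.0, -1/3, 1/3, 1.0])   # richer value set than {-1, +1}

n = 8
A = rng.normal(size=(n, n))
W = (A + A.T) / 2                 # symmetric weights ...
np.fill_diagonal(W, 0.0)          # ... with zero diagonal

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice(levels, size=n)
changed = True
while changed:                    # asynchronous updates until stable
    changed = False
    for i in range(n):
        h = W[i] @ s              # local field (W[i, i] = 0)
        # energy as a function of s[i] alone is -s[i] * h, so the
        # energy-minimizing allowed level maximizes levels * h
        best = levels[np.argmax(levels * h)]
        if best != s[i]:
            s[i], changed = best, True

print("stable state:", s, " energy:", round(energy(s), 3))
```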
Analysis and Comparison of Different Learning Algorithms for Pattern Association Problems
Bernasconi, J.
We investigate the behavior of different learning algorithms for networks of neuron-like units. As test cases we use simple pattern association problems, such as the XOR problem and symmetry detection problems. The algorithms considered are either versions of the Boltzmann machine learning rule or based on the backpropagation of errors. We also propose and analyze a generalized delta rule for linear threshold units. We find that the performance of a given learning algorithm depends strongly on the type of units used.
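For orientation, a sketch of the standard delta rule for a single linear threshold unit, which the paper's generalized rule extends to networks of such units; the exact generalization is defined in the paper and not reproduced here. The AND task and learning rate are illustrative choices.

```python
import numpy as np

# Standard delta rule for one linear threshold unit, shown on AND
# (linearly separable; XOR would require hidden units).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)    # AND targets

w = np.zeros(2)
b = 0.0
eta = 0.1

for epoch in range(25):
    for x, target in zip(X, t):
        y = float(w @ x + b > 0)           # hard threshold output
        err = target - y
        w += eta * err * x                 # delta rule weight update
        b += eta * err

print([float(w @ x + b > 0) for x in X])   # -> [0.0, 0.0, 0.0, 1.0]
```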
Temporal Patterns of Activity in Neural Networks
Patterns of activity over real neural structures are known to exhibit time-dependent behavior. The brain may be capable of utilizing the temporal behavior of activity in neural networks to perform functions that cannot otherwise be easily implemented, such as the origination of sequential behavior and the recognition of time-dependent stimuli. A model is presented here which uses neuronal populations with recurrent feedback connections in an attempt to observe and describe the resulting time-dependent behavior. Shortcomings and problems inherent to this model are discussed. Current models by other researchers are reviewed, and their similarities and differences are discussed.
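One common way to realize neuronal populations with recurrent feedback connections is a pair of coupled excitatory and inhibitory rate equations; depending on the couplings, the activity settles to a fixed point or keeps varying in time. The sketch below uses illustrative parameters of our choosing, not the model of the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Coupled excitatory (E) / inhibitory (I) population pair: recurrent
# feedback between the populations can sustain time-dependent activity.
dt, tau = 0.1, 1.0
wee, wei, wie, wii = 12.0, 10.0, 10.0, 2.0   # illustrative couplings
E, I = 0.1, 0.05
trace = []
for step in range(500):
    dE = (-E + sigmoid(wee * E - wei * I - 3.0)) / tau
    dI = (-I + sigmoid(wie * E - wii * I - 4.0)) / tau
    E, I = E + dt * dE, I + dt * dI
    trace.append(E)

# A nonzero late-time range indicates ongoing temporal variation
# rather than relaxation to a fixed point.
print(f"E range over last 200 steps: {min(trace[-200:]):.2f}"
      f" .. {max(trace[-200:]):.2f}")
```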
Cycles: A Simulation Tool for Studying Cyclic Neural Networks
Gately, Michael T.
A computer program has been designed and implemented to allow a researcher to analyze the oscillatory behavior of simulated neural networks with cyclic connectivity. The computer program, implemented on the Texas Instruments Explorer/Odyssey system, and the results of numerous experiments are discussed. The program, CYCLES, allows a user to construct, operate, and inspect neural networks containing cyclic connection paths with the aid of a powerful graphics-based interface. Numerous cycles have been studied, including cycles with one or more activation points, non-interruptible cycles, cycles with variable path lengths, and interacting cycles. The final class, interacting cycles, is important due to its ability to implement time-dependent goal processing in neural networks.
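A toy example of the simplest object described above: a directed ring of units with unit delay, around which a single activation point circulates indefinitely. This is our illustration of a cyclic connection path, not code from the CYCLES program.

```python
# A directed ring of n units with unit delay: each unit fires at the
# next time step iff its ring predecessor fired, so one activation
# point circulates around the cycle indefinitely.
n = 6
ring = [0.0] * n
ring[0] = 1.0                      # one activation point

for step in range(12):
    ring = [ring[i - 1] for i in range(n)]   # rotate activation forward
    print(f"t={step + 1}: " + "".join("*" if v else "." for v in ring))
```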
Correlational Strength and Computational Algebra of Synaptic Connections Between Neurons
Intracellular recordings in spinal cord motoneurons and cerebral cortex neurons have provided new evidence on the correlational strength of monosynaptic connections and on the relation between the shapes of postsynaptic potentials and the associated increase in firing probability. In these cells, excitatory postsynaptic potentials (EPSPs) produce cross-correlogram peaks which resemble in large part the derivative of the EPSP. Additional synaptic noise broadens the peak, but the peak area -- i.e., the number of above-chance firings triggered per EPSP -- remains proportional to the EPSP amplitude. The consequences of these data for information processing by polysynaptic connections are discussed. The effects of sequential polysynaptic links can be calculated by convolving the effects of the underlying monosynaptic connections.
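The final sentence suggests a direct computation: convolve the correlogram kernels of successive monosynaptic links to obtain the combined polysynaptic effect. A sketch with illustrative alpha-function kernels (the kernel shapes and time constants are our assumptions):

```python
import numpy as np

# Toy correlogram kernels for two successive monosynaptic links;
# an alpha function t/tau * exp(-t/tau) peaks at t = tau.
t = np.arange(0, 20, 1.0)                    # ms
peak1 = np.exp(-t / 2.0) * (t / 2.0)         # first-link correlogram peak
peak2 = np.exp(-t / 3.0) * (t / 3.0)         # second-link correlogram peak

# Combined two-link (polysynaptic) effect via convolution.
poly = np.convolve(peak1, peak2)[: len(t)]

print("peak of link 1 at", t[np.argmax(peak1)], "ms")
print("peak of link 2 at", t[np.argmax(peak2)], "ms")
print("polysynaptic peak at", t[np.argmax(poly)], "ms  (later and broader)")
```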
Distributed Neural Information Processing in the Vestibulo-Ocular System
Lau, Clifford, Honrubia, Vicente
A new distributed neural information-processing model is proposed to explain the response characteristics of the vestibulo-ocular system and to reflect more accurately the latest anatomical and neurophysiological data on the vestibular afferent fibers and vestibular nuclei. In this model, head motion is sensed topographically by hair cells in the semicircular canals. Hair cell signals are then processed by multiple synapses in the primary afferent neurons, which exhibit a continuum of varying dynamics. The model is an application of the concept of "multilayered" neural networks to the description of findings in the bullfrog vestibular nerve, and it allows us to formulate mathematically the behavior of an assembly of neurons whose physiological characteristics vary according to their anatomical properties.
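A sketch of the continuum-of-dynamics idea as we read it: a population of first-order filters with graded time constants responding to the same head-velocity input, so faster and slower afferents report different aspects of the motion. All values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Bank of first-order filters with graded time constants, standing in
# for afferent neurons whose dynamics vary along a continuum.
dt = 0.001                                   # s
taus = np.linspace(0.05, 1.0, 5)             # graded afferent time constants
T = int(2.0 / dt)
velocity = np.ones(T)                        # head-velocity step input

responses = np.zeros((len(taus), T))
for k, tau in enumerate(taus):
    r = 0.0
    for i in range(T):
        r += dt * (velocity[i] - r) / tau    # first-order dynamics
        responses[k, i] = r

# Faster afferents (small tau) reach steady state sooner.
print(np.round(responses[:, int(0.1 / dt)], 2))
```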