Classification of Electroencephalogram using Artificial Neural Networks
Tsoi, A. C., So, D. S. C., Sergejew, A.
In this paper, we consider the problem of classifying electroencephalogram (EEG) signals of normal subjects and of subjects suffering from psychiatric disorders, e.g., obsessive-compulsive disorder or schizophrenia, using a class of artificial neural networks, viz., the multi-layer perceptron. It is shown that the multi-layer perceptron is capable of classifying unseen test EEG signals to a high degree of accuracy.
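The abstract does not give the network topology or the features used; the following is only a minimal sketch of the kind of multi-layer perceptron classifier described, assuming each EEG segment has already been reduced to a fixed-length feature vector (the data, feature dimension, and hidden-layer size below are illustrative, not the paper's).

```python
# Minimal MLP-classification sketch for EEG segments (hypothetical features/labels).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 300 EEG segments x 16 spectral features;
# labels 0 = normal, 1 = obsessive-compulsive disorder, 2 = schizophrenia.
X = rng.normal(size=(300, 16))
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("held-out accuracy:", mlp.score(X_test, y_test))
```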
Digital Boltzmann VLSI for constraint satisfaction and learning
Murray, Michael, Leung, Ming-Tak, Boonyanit, Kan, Kritayakirana, Kong, Burg, James B., Wolff, Gregory J., Watanabe, Takahiro, Schwartz, Edward, Stork, David G., Peterson, Allen M.
We built a high-speed, digital mean-field Boltzmann chip and SBus board for general problems in constraint satisfaction and learning. Each chip has 32 neural processors and 4 weight update processors, supporting an arbitrary topology of up to 160 functional neurons. On-chip learning is at a theoretical maximum rate of 3.5 x 10^8 connection updates/sec; recall is 12,000 patterns/sec for typical conditions. The chip's high speed is due to parallel computation of inner products, limited (but adequate) precision for weights and activations (5 bits), fast clock (125 MHz), and several design insights.
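For readers unfamiliar with mean-field Boltzmann dynamics, the following is a rough floating-point sketch of the recall relaxation the chip performs in hardware; the 5-bit fixed-point arithmetic, parallel inner-product circuitry, weight-update processors, and the specific annealing schedule are not reproduced, and the sizes and constants are assumptions.

```python
# Mean-field Boltzmann relaxation sketch (floating point; the chip uses
# 5-bit weights/activations and hardware inner products).
import numpy as np

rng = np.random.default_rng(1)
n = 160                               # up to 160 functional neurons per chip
W = rng.normal(scale=0.1, size=(n, n))
W = (W + W.T) / 2                     # symmetric connection weights
np.fill_diagonal(W, 0.0)              # no self-connections

x = rng.uniform(-1, 1, size=n)        # mean-field activations in [-1, 1]
for T in np.linspace(2.0, 0.1, 50):   # annealing: lower the temperature
    for _ in range(10):               # relaxation sweeps at this temperature
        x = np.tanh(W @ x / T)

print("settled activations:", np.round(x[:8], 3))
```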
Computational Elements of the Adaptive Controller of the Human Arm
Shadmehr, Reza, Mussa-Ivaldi, Ferdinando A.
We consider the problem of how the CNS learns to control the dynamics of a mechanical system. Using a paradigm in which a subject's hand interacts with a virtual mechanical environment, we show that control is learned via composition of a model of the imposed dynamics. Some properties of the computational elements with which the CNS composes this model are inferred from the generalization capabilities of the subject outside the training data. At about the age of three months, children become interested in tactile exploration of objects around them. They attempt to reach for an object, but often fail to properly control their arm and end up missing their target. In the ensuing weeks, they rapidly improve, and soon they can not only reach accurately, they can also pick up the object and place it.
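The central claim is that the CNS builds an internal model of the imposed dynamics. As a toy illustration only, the sketch below adapts a linear internal model of a viscous force field F = B·q̇ by reducing force prediction error; the matrix, learning rate, and update rule are illustrative and not the authors' method, which infers the CNS's computational elements from human generalization data.

```python
# Toy sketch: error-driven adaptation of an internal model of an imposed
# viscous force field F = B @ qdot (illustrative constants only).
import numpy as np

rng = np.random.default_rng(2)
B = np.array([[-10.0, -6.0],
              [ -6.0, 10.0]])          # environment: force as a function of hand velocity

W = np.zeros((2, 2))                   # learner's internal model of B
eta = 0.01

for _ in range(2000):
    qdot = rng.normal(size=2)          # hand velocity sampled along training movements
    f_env = B @ qdot                   # force actually imposed by the environment
    f_pred = W @ qdot                  # force predicted by the internal model
    err = f_env - f_pred
    W += eta * np.outer(err, qdot)     # delta-rule update of the internal model

print("learned model:\n", np.round(W, 2))
```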
Signature Verification using a "Siamese" Time Delay Neural Network
Bromley, Jane, Guyon, Isabelle, LeCun, Yann, Säckinger, Eduard, Shah, Roopak
The aim of the project was to make a signature verification system based on the NCR 5990 Signature Capture Device (a pen-input tablet) and to use 80 bytes or less for signature feature storage, so that the features can be stored on the magnetic strip of a credit card. Verification using a digitizer such as the 5990, which generates spatial coordinates as a function of time, is known as dynamic verification. Much research has been carried out on signature verification. Function-based methods, which fit a function to the pen trajectory, have been found to lead to higher performance, while parameter-based methods, which extract some number of parameters from a signature, make a lower requirement on memory space for signature storage (see Lorette and Plamondon (1990) for comments). We chose to use the complete time extent of the signature, with the preprocessing described below, as input to a neural network, and to allow the network to compress the information.
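The "Siamese" idea is that two identical subnetworks with shared weights embed the stored reference and the new attempt, and the two embeddings are compared. The sketch below shows only that comparison structure with a cosine distance; the time-delay layers, the preprocessing, the training procedure, and all dimensions here are assumptions, not the paper's architecture.

```python
# Siamese comparison sketch: one shared network embeds both signatures,
# then the cosine distance between embeddings decides accept vs. reject.
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(scale=0.1, size=(64, 200))   # shared first layer
W2 = rng.normal(scale=0.1, size=(16, 64))    # shared output layer -> 16-dim embedding

def embed(x):
    """Shared subnetwork applied to a flattened signature trajectory."""
    h = np.tanh(W1 @ x)
    return np.tanh(W2 @ h)

def cosine_distance(a, b):
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

reference = rng.normal(size=200)             # stored reference signature features
attempt = reference + 0.05 * rng.normal(size=200)

d = cosine_distance(embed(reference), embed(attempt))
print("accept" if d < 0.2 else "reject", f"(distance {d:.3f})")
```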
Neurobiology, Psychophysics, and Computational Models of Visual Attention
Niebur, Ernst, Olshausen, Bruno A.
The purpose of this workshop was to discuss both recent experimental findings and computational models of the neurobiological implementation of selective attention. Recent experimental results were presented in two of the four presentations given (C.E. Connor, Washington University, and B.C. Motter, SUNY and V.A. Medical Center, Syracuse), while the other two talks were devoted to computational models. Connor presented the results of an experiment in which the receptive field profiles of V4 neurons were mapped during different states of attention in an awake, behaving monkey. The attentional focus was manipulated in this experiment by altering the position of a behaviorally relevant ring-shaped stimulus.
Foraging in an Uncertain Environment Using Predictive Hebbian Learning
Montague, P. Read, Dayan, Peter, Sejnowski, Terrence J.
Survival is enhanced by an ability to predict the availability of food, the likelihood of predators, and the presence of mates. We present a concrete model that uses diffuse neurotransmitter systems to implement a predictive version of a Hebb learning rule embedded in a neural architecture based on anatomical and physiological studies on bees. The model captured the strategies seen in the behavior of bees and a number of other animals when foraging in an uncertain environment. The predictive model suggests a unified way in which neuromodulatory influences can be used to bias actions and control synaptic plasticity. Successful predictions enhance adaptive behavior by allowing organisms to prepare for future actions, rewards, or punishments. Moreover, it is possible to improve upon behavioral choices if the consequences of executing different actions can be reliably predicted. Although classical and instrumental conditioning results from the psychological literature [1] demonstrate that the vertebrate brain is capable of reliable prediction, how these predictions are computed in brains is not yet known. The brains of vertebrates and invertebrates possess small nuclei which project axons throughout large expanses of target tissue and deliver various neurotransmitters such as dopamine, norepinephrine, and acetylcholine [4]. The activity in these systems may report on reinforcing stimuli in the world or may reflect an expectation of future reward [5, 6, 7, 8].
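A predictive Hebb rule of this kind can be summarized as a temporal-difference-style prediction error that gates a Hebbian weight change. The sketch below shows only that core update on a single cue-reward contingency; the symbols, constants, and trial structure are illustrative rather than the paper's bee-foraging architecture.

```python
# Predictive Hebbian learning sketch: a prediction error
# delta(t) = r(t) + P(t) - P(t-1) gates Hebbian updates of the weights
# producing the prediction P(t) = w . x(t).
import numpy as np

n_cues = 3
w = np.zeros(n_cues)
eta = 0.1

for trial in range(500):
    x_prev = np.array([1.0, 0.0, 0.0])    # sensory cue present at time t-1
    x_now = np.zeros(n_cues)              # cue gone at time t
    r = 1.0                               # reward delivered at time t

    P_prev = w @ x_prev
    P_now = w @ x_now
    delta = r + P_now - P_prev            # prediction error ("neuromodulator" signal)
    w += eta * delta * x_prev             # Hebbian update gated by the error

print("learned cue weight:", np.round(w, 3))   # approaches 1: the cue predicts reward
```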
Inverse Dynamics of Speech Motor Control
Hirayama, Makoto, Vatikiotis-Bateson, Eric, Kawato, Mitsuo
This inverse dynamics model allows the use of a faster speech motor control scheme, which can be applied to phoneme-to-speech synthesis via musculo-skeletal system dynamics, or to future use in speech recognition. The forward acoustic model, which is the mapping from articulator trajectories to the acoustic parameters, was improved by adding velocity and voicing information inputs to distinguish acoustic
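To make the forward acoustic mapping concrete, here is a rough sketch of a network regressor whose inputs are articulator positions, their velocities, and a voicing flag, and whose outputs are acoustic parameters. The network type, sizes, and placeholder data are assumptions; the paper's actual model and acoustic parameterization are not specified in the abstract above.

```python
# Sketch of a forward acoustic model: articulator positions + velocities +
# voicing flag -> acoustic parameters (placeholder dimensions and data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n_frames, n_artic, n_acoustic = 1000, 6, 4

pos = rng.normal(size=(n_frames, n_artic))          # articulator positions
vel = np.gradient(pos, axis=0)                      # their velocities
voicing = rng.integers(0, 2, size=(n_frames, 1))    # voiced/unvoiced flag

X = np.hstack([pos, vel, voicing])
Y = rng.normal(size=(n_frames, n_acoustic))         # placeholder acoustic parameters

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, Y)
print("training fit (R^2):", round(model.score(X, Y), 3))
```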
Clustering with a Domain-Specific Distance Measure
Gold, Steven, Mjolsness, Eric, Rangarajan, Anand
Critical features of a domain (such as invariance under translation, rotation, and permutation) are captured within the clustering procedure, rather than reflected in the properties of feature sets created prior to clustering. The distance measure and learning problem are formally described as nested objective functions. We derive an efficient algorithm by using optimization techniques that allow us to divide up the objective function into parts which may be minimized in distinct phases. The algorithm has accurately recreated 10 prototypes from a randomly generated sample database of 100 images consisting of 20 points each in 120 experiments. Finally, by incorporating permutation invariance in our distance measure, we have a technique that we may be able to apply to the clustering of graphs. Our goal is to develop measures which will enable the learning of objects with shape or structure. Acknowledgements: This work has been supported by AFOSR grant F49620-92-J-0465 and ONR/DARPA grant N00014-92-J-4048.
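To illustrate what a translation-, rotation-, and permutation-invariant distance between point sets can look like, the sketch below centers both sets, matches points by optimal assignment, and recovers the rotation by orthogonal Procrustes. This alternating closed-form alignment is only an illustration of the invariances, not the authors' nested-objective algorithm, and the alternation can get stuck for large rotations.

```python
# Translation/rotation/permutation-invariant distance between 2-D point sets
# via centering, optimal assignment, and orthogonal Procrustes (illustration only).
import numpy as np
from scipy.optimize import linear_sum_assignment

def invariant_distance(X, Y, n_iters=10):
    X = X - X.mean(axis=0)                      # translation invariance
    Y = Y - Y.mean(axis=0)
    R = np.eye(2)
    for _ in range(n_iters):
        # Permutation: match each point of X to a point of the rotated Y.
        cost = np.linalg.norm(X[:, None, :] - (Y @ R.T)[None, :, :], axis=2)
        _, cols = linear_sum_assignment(cost)
        Yp = Y[cols]
        # Rotation: orthogonal Procrustes aligning the permuted Y to X.
        U, _, Vt = np.linalg.svd(X.T @ Yp)
        R = U @ Vt
    return np.linalg.norm(X - Yp @ R.T)

rng = np.random.default_rng(6)
A = rng.normal(size=(20, 2))
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
B = (A @ rot.T + 3.0)[rng.permutation(20)]      # rotated, shifted, permuted copy
print("distance to transformed copy:", round(invariant_distance(A, B), 6))
```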
Grammatical Inference by Attentional Control of Synchronization in an Oscillating Elman Network
Baird, Bill, Troyer, Todd, Eeckman, Frank
We show how an "Elman" network architecture, constructed from recurrently connected oscillatory associative memory network modules, can employ selective "attentional" control of synchronization to direct the flow of communication and computation within the architecture to solve a grammatical inference problem. Previously we have shown how the discrete-time "Elman" network algorithm can be implemented in a network completely described by continuous ordinary differential equations. The time steps (machine cycles) of the system are implemented by rhythmic variation (clocking) of a bifurcation parameter. In this architecture, oscillation amplitude codes the information content or activity of a module (unit), whereas phase and frequency are used to "softwire" the network. Only synchronized modules communicate by exchanging amplitude information; the activity of non-resonating modules contributes incoherent crosstalk noise. Attentional control is modeled as a special subset of the hidden modules with outputs which affect the resonant frequencies of other hidden modules. They control synchrony among the other modules and direct the flow of computation (attention) to effect transitions between two subgraphs of a thirteen-state automaton which the system emulates to generate a Reber grammar. The internal crosstalk noise is used to drive the required random transitions of the automaton.
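A caricature of the "softwiring" idea: each module broadcasts an amplitude on an oscillatory carrier, and a receiver demodulates with its own phase, so phase-locked senders contribute on average while detuned senders only add crosstalk that averages out; retuning a module's frequency therefore rewires the effective connectivity. The sketch below is only this caricature with invented constants, not the paper's ODE model of oscillatory associative memories with a clocked bifurcation parameter.

```python
# Synchrony-gated communication between oscillatory modules (toy model).
import numpy as np

n, dt, steps = 4, 0.001, 5000
freq = np.array([10.0, 10.0, 14.0, 14.0])   # modules 0,1 and 2,3 share a frequency
phase = np.zeros(n)
amp = np.array([1.0, 0.0, 0.0, 0.0])        # only module 0 starts "active"
W = np.ones((n, n)) - np.eye(n)             # all-to-all anatomical connections

for _ in range(steps):
    phase += 2 * np.pi * freq * dt
    carrier = np.cos(phase)
    received = W @ (amp * carrier)          # superimposed oscillatory signals
    amp += dt * (2 * carrier * received - amp)   # synchronous detection + leak

# Amplitude spreads from module 0 to its synchronized partner 1;
# the detuned modules 2 and 3 receive only small, averaged-out crosstalk.
print(np.round(amp, 2))
```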