Classifying Facial Action

Neural Information Processing Systems

Measurement of facial expressions is important for research and assessment in psychiatry, neurology, and experimental psychology (Ekman, Huang, Sejnowski, & Hager, 1992), and has technological applications in consumer-friendly user interfaces, interactive video, and entertainment rating. The Facial Action Coding System (FACS) is a method for measuring facial expressions in terms of activity in the underlying facial muscles (Ekman & Friesen, 1978). We are exploring ways to automate FACS.


Stable Linear Approximations to Dynamic Programming for Stochastic Control Problems with Local Transitions

Neural Information Processing Systems

Recently, however, there have been some successful applications of neural networks in a totally different context - that of sequential decision making under uncertainty (stochastic control). Stochastic control problems have been studied extensively in the operations research and control theory literature for a long time, using the methodology of dynamic programming [Bertsekas, 1995]. In dynamic programming, the most important object is the cost-to-go (or value) function, which evaluates the expected future cost as a function of the current state.
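As an aside for readers new to dynamic programming, the sketch below illustrates what a cost-to-go function is: a plain value-iteration loop on a small, invented Markov decision process. This is a generic textbook illustration, not the approximation method or the problem class studied in the paper.

```python
# Minimal value-iteration sketch: computes the cost-to-go function J for a
# tiny toy Markov decision process. The states, transition probabilities,
# stage costs, and discount factor below are invented for illustration.
import numpy as np

n_states, n_actions = 4, 2
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
g = rng.uniform(0.0, 1.0, size=(n_actions, n_states))             # stage costs g(s, a)
gamma = 0.95                                                       # discount factor

J = np.zeros(n_states)
for _ in range(1000):
    # Bellman backup: J(s) <- min_a [ g(s, a) + gamma * sum_s' P(s'|s, a) J(s') ]
    Q = g + gamma * (P @ J)        # shape (n_actions, n_states)
    J_new = Q.min(axis=0)
    if np.max(np.abs(J_new - J)) < 1e-8:
        break
    J = J_new
print("cost-to-go:", J)
```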


Using Feedforward Neural Networks to Monitor Alertness from Changes in EEG Correlation and Coherence

Neural Information Processing Systems

We report here that changes in the normalized electroencephalographic (EEG) cross-spectrum can be used in conjunction with feedforward neural networks to monitor changes in alertness of operators continuously and in near-real time. Previously, we have shown that EEG spectral amplitudes covary with changes in alertness as indexed by changes in behavioral error rate on an auditory detection task [6,4]. Here, we report for the first time that increases in the frequency of detection errors in this task are also accompanied by patterns of increased and decreased spectral coherence in several frequency bands and EEG channel pairs. Relationships between EEG coherence and performance vary between subjects, but within subjects, their topographic and spectral profiles appear stable from session to session. Changes in alertness also covary with changes in correlations among EEG waveforms recorded at different scalp sites, and neural networks can also estimate alertness from correlation changes in spontaneous and unobtrusively recorded EEG signals.

1 Introduction

When humans become drowsy, EEG scalp recordings of potential oscillations change dramatically in frequency, amplitude, and topographic distribution [3]. These changes are complex and differ between subjects [10].
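The sketch below illustrates the general pipeline described above under stated assumptions: band-averaged coherence features are computed from pairs of EEG channels and fed to a small feedforward network. The sampling rate, frequency bands, network size, and the synthetic data are all placeholders, not the authors' recording protocol or model.

```python
# Sketch of the general approach (not the authors' code): coherence features
# from two EEG channels regressed onto an alertness index with a small MLP.
import numpy as np
from scipy.signal import coherence
from sklearn.neural_network import MLPRegressor

fs = 256                       # assumed sampling rate in Hz
rng = np.random.default_rng(1)

def coherence_features(ch1, ch2):
    """Mean coherence in a few broad EEG bands (delta/theta/alpha/beta)."""
    f, cxy = coherence(ch1, ch2, fs=fs, nperseg=fs * 2)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]
    return [cxy[(f >= lo) & (f < hi)].mean() for lo, hi in bands]

# Synthetic "epochs": 200 trials of two-channel noise plus a placeholder label.
X = np.array([coherence_features(rng.standard_normal(fs * 30),
                                 rng.standard_normal(fs * 30))
              for _ in range(200)])
y = rng.uniform(0, 1, size=200)   # placeholder alertness / error-rate index

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X, y)
```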


Tempering Backpropagation Networks: Not All Weights are Created Equal

Neural Information Processing Systems

Backpropagation learning algorithms typically collapse the network's structure into a single vector of weight parameters to be optimized. We suggest that their performance may be improved by utilizing the structural information instead of discarding it, and introduce a framework for "tempering" each weight accordingly. In the tempering model, activation and error signals are treated as approximately independent random variables. The characteristic scale of weight changes is then matched to that of the residuals, allowing structural properties such as a node's fan-in and fan-out to affect the local learning rate and backpropagated error. The model also permits calculation of an upper bound on the global learning rate for batch updates, which in turn leads to different update rules for bias vs. non-bias weights. This approach yields hitherto unparalleled performance on the family relations benchmark, a deep multi-layer network: for both batch learning with momentum and the delta-bar-delta algorithm, convergence at the optimal learning rate is sped up by more than an order of magnitude.
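The sketch below conveys the flavour of the idea under an assumed 1/fan-in scaling and a separate rate for biases; the paper derives its own scaling from the independence model, so treat the constants here as placeholders rather than the published rule.

```python
# Loose sketch of "tempered" per-layer learning rates (scaling choices are
# assumptions for illustration, not the paper's exact derivation).
import numpy as np

layer_sizes = [4, 8, 3]          # toy network: 4 inputs, 8 hidden units, 3 outputs
base_lr = 0.5

weights, biases, lrs_w, lrs_b = [], [], [], []
for fan_in, fan_out in zip(layer_sizes[:-1], layer_sizes[1:]):
    weights.append(np.random.randn(fan_in, fan_out) / np.sqrt(fan_in))
    biases.append(np.zeros(fan_out))
    lrs_w.append(base_lr / fan_in)   # temper non-bias weights by their fan-in
    lrs_b.append(base_lr)            # biases keep the global rate

def sgd_step(grads_w, grads_b):
    """Apply one tempered gradient step; gradients come from ordinary backprop."""
    for W, b, gW, gb, lw, lb in zip(weights, biases, grads_w, grads_b, lrs_w, lrs_b):
        W -= lw * gW
        b -= lb * gb
```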


Beating a Defender in Robotic Soccer: Memory-Based Learning of a Continuous Function

Neural Information Processing Systems

Our research works towards this broad goal from a Machine Learning perspective. We are particularly interested in investigating how an intelligent agent can choose an action in an adversarial environment. We assume that the agent has a specific goal to achieve. We conduct this investigation in a framework where teams of agents compete in a game of robotic soccer. The real system of model cars remotely controlled from off-board computers is under development.
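As a rough illustration of memory-based learning of a continuous function (a generic distance-weighted nearest-neighbour scheme, not necessarily the authors' implementation), one might store situation-action-outcome examples and pick the candidate action with the best predicted outcome:

```python
# Generic memory-based (nearest-neighbour) sketch; feature names and the
# shot-angle example are hypothetical placeholders.
import numpy as np

memory_x = []   # feature vectors, e.g. (ball position, defender position, angle)
memory_y = []   # observed outcome, e.g. 1.0 = success, 0.0 = blocked

def remember(x, outcome):
    memory_x.append(np.asarray(x, dtype=float))
    memory_y.append(float(outcome))

def predict(x, k=5):
    """Distance-weighted average outcome of the k nearest stored examples."""
    X = np.stack(memory_x)
    d = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return float(np.dot(w, np.asarray(memory_y)[idx]) / w.sum())

def choose_angle(state, candidate_angles):
    """Pick the action (here: a shot angle) with the highest predicted outcome."""
    return max(candidate_angles, key=lambda a: predict(list(state) + [a]))
```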


Dynamics of Attention as Near Saddle-Node Bifurcation Behavior

Neural Information Processing Systems

Most studies of attention have focused on the selection process of incoming sensory cues (Posner et al., 1980; Koch et al., 1985; Desimone et al., 1995). Emphasis was placed on the phenomenon that the same sensory stimuli can give rise to different percepts. However, the selection of sensory input itself is not the final goal of attention. We consider attention as a means for goal-directed behavior and survival of the animal. In this view, dynamical properties of attention are crucial. While attention has to be maintained long enough to enable robust response to sensory input, it also has to be shifted quickly to a novel cue that is potentially important. Long-term maintenance and quick transition are critical requirements for attention dynamics.
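For readers unfamiliar with the bifurcation named in the title, the textbook saddle-node normal form (shown below as a generic illustration, not an equation taken from this paper) already exhibits both requirements: a state that can be held, and a slow-then-fast escape once a control parameter crosses the bifurcation point.

```latex
% Saddle-node normal form (textbook illustration, not from the paper):
\[ \dot{x} \;=\; \mu + x^{2} \]
% For \mu < 0 there is a stable fixed point at x = -\sqrt{-\mu}, so the state
% can be maintained indefinitely. For \mu slightly greater than 0 the fixed
% point vanishes; trajectories linger near x \approx 0 and then escape
% rapidly, with a passage time that scales as
\[ T \;\sim\; \frac{\pi}{\sqrt{\mu}} \]
% combining long dwell times with an abrupt transition near the bifurcation.
```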


Modern Analytic Techniques to Solve the Dynamics of Recurrent Neural Networks

Neural Information Processing Systems

We describe the use of modern analytical techniques in solving the dynamics of symmetric and nonsymmetric recurrent neural networks near saturation. These explicitly take into account the correlations between the post-synaptic potentials, and thereby allow for a reliable prediction of transients.

1 INTRODUCTION

Recurrent neural networks have been rather popular in the physics community, because they lend themselves so naturally to analysis with tools from equilibrium statistical mechanics. This was the main theme of physicists between, say, 1985 and 1990. Less familiar to the neural network community is a subsequent wave of theoretical physical studies, dealing with the dynamics of symmetric and nonsymmetric recurrent networks. The strategy here is to try to describe the processes at a reduced level of an appropriate small set of dynamic macroscopic observables.


Temporal coding in the sub-millisecond range: Model of barn owl auditory pathway

Neural Information Processing Systems

Binaural coincidence detection is essential for the localization of external sounds and requires auditory signal processing with high temporal precision. We present an integrate-and-fire model of spike processing in the auditory pathway of the barn owl. It is shown that a temporal precision in the microsecond range can be achieved with neuronal time constants which are at least one order of magnitude longer. An important feature of our model is an unsupervised Hebbian learning rule which leads to a temporal fine tuning of the neuronal connections.
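A toy leaky integrate-and-fire coincidence detector, sketched below, shows the basic ingredient: the neuron fires preferentially when spikes from the two ears arrive close together in time, even though its membrane time constant is much longer than the delays being resolved. The parameters, reset rule, and absence of any learning are assumptions for illustration; the paper's model and Hebbian fine-tuning are more elaborate.

```python
# Toy leaky integrate-and-fire coincidence detector (illustrative only).
import numpy as np

dt = 1e-6            # 1 microsecond time step
tau_m = 2e-4         # 200 microsecond membrane time constant (assumed)
v_thresh, w_syn = 1.0, 0.6   # one input alone stays below threshold; two coincide -> spike

def lif_response(spikes_left, spikes_right):
    """Count output spikes given two boolean input spike trains of equal length."""
    v, n_out = 0.0, 0
    for s_l, s_r in zip(spikes_left, spikes_right):
        v += (dt / tau_m) * (-v) + w_syn * (s_l + s_r)   # leak plus synaptic input
        if v >= v_thresh:
            n_out += 1
            v = 0.0          # reset after a spike
    return n_out
```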


Human Reading and the Curse of Dimensionality

Neural Information Processing Systems

Whereas optical character recognition (OCR) systems learn to classify single characters, people learn to classify long character strings in parallel, within a single fixation. This difference is surprising because high dimensionality is associated with poor classification learning. This paper suggests that the human reading system avoids these problems because the number of to-be-classified images is reduced by consistent and optimal eye fixation positions, and by character sequence regularities. An interesting difference exists between human reading and optical character recognition (OCR) systems. The input/output dimensionality of character classification in human reading is much greater than that for OCR systems (see Figure 1). OCR systems classify one character at a time, while the human reading system classifies as many as 8-13 characters per eye fixation (Rayner, 1979), and within a fixation, character category and sequence information is extracted in parallel (Blanchard, McConkie, Zola, and Wolverton, 1984; Reicher, 1969).


Stochastic Hillclimbing as a Baseline Method for Evaluating Genetic Algorithms

Neural Information Processing Systems

We investigate the effectiveness of stochastic hillclimbing as a baseline for evaluating the performance of genetic algorithms (GAs) as combinatorial function optimizers. In particular, we address two problems to which GAs have been applied in the literature.
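For concreteness, a minimal stochastic hillclimber on bit strings looks like the sketch below; the objective function is a placeholder, not one of the benchmark problems treated in the paper.

```python
# Minimal stochastic hillclimbing baseline on bit strings (generic sketch).
import random

def stochastic_hillclimb(fitness, n_bits, n_steps, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]   # random starting string
    best = fitness(x)
    for _ in range(n_steps):
        i = rng.randrange(n_bits)        # flip one randomly chosen bit
        x[i] ^= 1
        f = fitness(x)
        if f >= best:                    # keep the move if it does not hurt
            best = f
        else:
            x[i] ^= 1                    # otherwise undo the flip
    return x, best

# Example: maximize the number of ones (placeholder objective).
print(stochastic_hillclimb(sum, 50, 2000))
```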