Time-Warping Network: A Hybrid Framework for Speech Recognition

Neural Information Processing Systems

Such systems attempt to combine the best features of both models: the temporal structure of HMMs and the discriminative power of neural networks. In this work we define a time-warping (TW) neuron that extends the operation of the formal neuron of a back-propagation network by warping the input pattern to match it optimally to its weights. We show that a single-layer network of TW neurons is equivalent to a Gaussian density HMM-based recognition system.
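
As a rough illustration of the idea, the sketch below aligns an input sequence to a neuron's weight sequence by dynamic time warping and squashes the negated alignment cost; the function names and the squared-Euclidean local cost are assumptions made for this example, not the paper's exact formulation.

```python
import numpy as np

def dtw_cost(x, w):
    """Dynamic-time-warping cost between an input sequence x (T x d) and a
    weight sequence w (K x d), using squared Euclidean local distances
    (an illustrative choice, not necessarily the paper's)."""
    T, K = len(x), len(w)
    D = np.full((T + 1, K + 1), np.inf)
    D[0, 0] = 0.0
    for t in range(1, T + 1):
        for k in range(1, K + 1):
            local = np.sum((x[t - 1] - w[k - 1]) ** 2)
            D[t, k] = local + min(D[t - 1, k], D[t, k - 1], D[t - 1, k - 1])
    return D[T, K]

def tw_neuron_activation(x, w):
    """A time-warping neuron: warp the input onto the weights, then squash
    the negated alignment cost, by analogy with a formal neuron."""
    return np.tanh(-dtw_cost(x, w))

# Toy example: a 6-frame input matched against a 4-frame weight template.
rng = np.random.default_rng(0)
print(tw_neuron_activation(rng.normal(size=(6, 3)), rng.normal(size=(4, 3))))
```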


The Efficient Learning of Multiple Task Sequences

Neural Information Processing Systems

I present a modular network architecture and a learning algorithm based on incremental dynamic programming that allows a single learning agent to learn to solve multiple Markovian decision tasks (MDTs) with significant transfer of learning across the tasks. I consider a class of MDTs, called composite tasks, formed by temporally concatenating a number of simpler, elemental MDTs. The architecture is trained on a set of composite and elemental MDTs. The temporal structure of a composite task is assumed to be unknown and the architecture learns to produce a temporal decomposition. It is shown that under certain conditions the solution of a composite MDT can be constructed by computationally inexpensive modifications of the solutions of its constituent elemental MDTs.

1 INTRODUCTION

Most applications of domain independent learning algorithms have focussed on learning single tasks. Building more sophisticated learning agents that operate in complex environments will require handling multiple tasks/goals (Singh, 1992). Research effort on the scaling problem has concentrated on discovering faster learning algorithms, and while that will certainly help, techniques that allow transfer of learning across tasks will be indispensable for building autonomous learning agents that have to learn to solve multiple tasks. In this paper I consider a learning agent that interacts with an external, finite-state, discrete-time, stochastic dynamical environment and faces multiple sequences of Markovian decision tasks (MDTs).
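
The following toy sketch (not Singh's actual modular architecture) illustrates the compositional idea: two elemental "reach a goal" tasks on a one-dimensional corridor are solved separately by value iteration, and a composite "reach A, then B" task is handled by switching between the elemental solutions once the first subgoal is achieved. The states, rewards, and switching rule are all assumptions invented for the example.

```python
import numpy as np

N_STATES, GAMMA = 10, 0.95
ACTIONS = (-1, +1)  # move left / move right

def clamp(s):
    return min(max(s, 0), N_STATES - 1)

def solve_elemental(goal):
    """Value iteration for the elemental task 'reach `goal`' (reward 1 on arrival)."""
    V = np.zeros(N_STATES)
    for _ in range(200):
        for s in range(N_STATES):
            if s == goal:
                V[s] = 0.0
                continue
            V[s] = max(float(clamp(s + a) == goal) + GAMMA * V[clamp(s + a)]
                       for a in ACTIONS)
    return V

def greedy_action(s, V, goal):
    return max(ACTIONS,
               key=lambda a: float(clamp(s + a) == goal) + GAMMA * V[clamp(s + a)])

V_A = solve_elemental(goal=2)   # elemental task A: reach state 2
V_B = solve_elemental(goal=8)   # elemental task B: reach state 8

# Composite task: reach state 2, then state 8, starting from state 5.
s, phase, path = 5, "A", [5]
while not (phase == "B" and s == 8):
    goal, V = (2, V_A) if phase == "A" else (8, V_B)
    s = clamp(s + greedy_action(s, V, goal))
    path.append(s)
    if phase == "A" and s == 2:
        phase = "B"
print(path)   # [5, 4, 3, 2, 3, 4, 5, 6, 7, 8]
```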


VISIT: A Neural Model of Covert Visual Attention

Neural Information Processing Systems

Visual attention is the ability to dynamically restrict processing to a subset of the visual field. Researchers have long argued that such a mechanism is necessary to efficiently perform many intermediate level visual tasks. This paper describes VISIT, a novel neural network model of visual attention.
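
VISIT's actual circuitry is not reproduced in this abstract, but the core notion of covert attention as gating can be sketched generically: a movable spatial mask restricts which part of a feature map is passed on for further processing, without any shift of the input itself. Everything below (the circular window, the array sizes) is an assumption for illustration only.

```python
import numpy as np

def attend(feature_map, center, radius):
    """Zero out responses outside a circular attentional window (illustrative)."""
    h, w = feature_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = (ys - center[0]) ** 2 + (xs - center[1]) ** 2 <= radius ** 2
    return feature_map * mask

features = np.random.default_rng(1).random((16, 16))
gated = attend(features, center=(4, 10), radius=3)
print(gated.sum(), features.sum())   # only the attended region contributes
```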


Models Wanted: Must Fit Dimensions of Sleep and Dreaming

Neural Information Processing Systems

During waking and sleep, the brain and mind undergo a tightly linked and precisely specified set of changes in state. At the level of neurons, this process has been modeled by variations of Volterra-Lotka equations for cyclic fluctuations of brainstem cell populations. However, neural network models based upon rapidly developing knowledge of the specific population connectivities and their differential responses to drugs have not yet been developed. Furthermore, only the most preliminary attempts have been made to model across states. Some of our own attempts to link rapid eye movement (REM) sleep neurophysiology and dream cognition using neural network approaches are summarized in this paper.
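
For readers unfamiliar with the Volterra-Lotka form, the sketch below integrates the classic two-population equations with an explicit Euler step; the coefficients and step size are arbitrary illustrations, not the fitted parameters of any published sleep-cycle model.

```python
# Classic Volterra-Lotka (predator-prey style) dynamics: two interacting
# populations (by analogy, REM-on and REM-off cell groups) whose activity
# levels rise and fall cyclically. All numbers here are arbitrary.
a, b, c, d = 1.0, 0.5, 0.5, 1.0   # growth / inhibition coefficients (assumed)
x, y = 2.0, 1.0                   # initial activities of the two populations
dt, steps = 0.01, 5000
trajectory = []
for _ in range(steps):
    dx = (a - b * y) * x          # x grows on its own, is inhibited by y
    dy = (c * x - d) * y          # y is excited by x and decays alone
    x, y = x + dt * dx, y + dt * dy
    trajectory.append((x, y))
print(trajectory[::1000])         # the two activities oscillate out of phase
```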


HARMONET: A Neural Net for Harmonizing Chorales in the Style of J. S. Bach

Neural Information Processing Systems

The chord skeleton is obtained if eighth and sixteenth notes are viewed as omittable ornamentations. Furthermore, if the chords are conceived as harmonies with certain attributes such as "inversion" or "characteristic dissonances", the chorale is reducible to its harmonic skeleton, a thoroughbass-like representation (Figure 2).
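
A toy version of that reduction (not HARMONET's actual preprocessing) is easy to state in code: drop every note shorter than a quarter note and what remains is the chord skeleton. The note names and durations below are invented for the example; durations are in quarter-note units.

```python
# Each note is (pitch, duration in quarter-note units); values are made up.
melody = [("G4", 1.0), ("A4", 0.5), ("B4", 0.25), ("C5", 1.0), ("D5", 0.5)]

def chord_skeleton(notes, min_duration=1.0):
    """Keep only notes at least a quarter note long; eighths (0.5) and
    sixteenths (0.25) are treated as omittable ornamentation."""
    return [(pitch, dur) for pitch, dur in notes if dur >= min_duration]

print(chord_skeleton(melody))   # [('G4', 1.0), ('C5', 1.0)]
```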



CCD Neural Network Processors for Pattern Recognition

Neural Information Processing Systems

A CCD-based processor that we call the NNC2 is presented. The NNC2 implements a fully connected 192-input, 32-output two-layer network and can be cascaded to form multilayer networks or used in parallel for additional input or output nodes. The device computes 1.92 x 10
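
The connectivity described (a fully connected 192-input, 32-output stage that can be cascaded into deeper networks) corresponds numerically to the back-of-the-envelope sketch below; the random weights, the tanh nonlinearity, and the zero-padding between stages are placeholders, not properties of the CCD hardware.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(32, 192))       # first NNC2 stage: 192 inputs -> 32 outputs
W2 = rng.normal(size=(32, 192))       # second cascaded stage

x = rng.normal(size=192)              # one input pattern
h = np.tanh(W1 @ x)                   # 32 first-stage outputs
h_padded = np.pad(h, (0, 192 - 32))   # pad to fill the next stage's 192 inputs
y = np.tanh(W2 @ h_padded)            # 32 final outputs
print(y.shape)                        # (32,)
```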


Unsupervised Classifiers, Mutual Information and 'Phantom Targets'

Neural Information Processing Systems

We derive criteria for training adaptive classifier networks to perform unsupervised data analysis. The first criterion turns a simple Gaussian classifier into a simple Gaussian mixture analyser. The second criterion, which is much more generally applicable, is based on mutual information.
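
A minimal sketch of the mutual-information criterion, assuming a softmax classifier: the mutual information between class label and input can be estimated as the entropy of the mean class posterior minus the mean entropy of the per-example posteriors, so maximizing it favors assignments that are individually confident yet globally balanced. The toy posteriors below are made up.

```python
import numpy as np

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=-1)

def mutual_information(posteriors):
    """posteriors: (n_examples, n_classes), each row summing to one."""
    mean_posterior = posteriors.mean(axis=0)
    return entropy(mean_posterior) - entropy(posteriors).mean()

p = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.9],
              [0.2, 0.8]])
print(mutual_information(p))   # high when classes are confident and evenly used
```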


Locomotion in a Lower Vertebrate: Studies of the Cellular Basis of Rhythmogenesis and Oscillator Coupling

Neural Information Processing Systems

To test whether the known connectivities of neurons in the lamprey spinal cord are sufficient to account for locomotor rhythmogenesis, a "connectionist" neural network simulation was done using identical cells connected according to experimentally established patterns. It was demonstrated that the network oscillates in a stable manner with the same phase relationships among the neurons as observed in the lamprey. The model was then used to explore coupling between identical oscillators. It was concluded that the neurons can have a dual role as rhythm generators and as coordinators between oscillators to produce the phase relations observed among segmental oscillators during swimming.
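
The cell-level simulation itself is not reproduced here, but the coupling result can be illustrated at a much more abstract level with a chain of phase oscillators: nearest-neighbour coupling pulls the segmental oscillators into a stable pattern of intersegmental phase lags. The frequencies, coupling strength, and rostral frequency bias below are arbitrary assumptions.

```python
import numpy as np

n_segments, dt, steps = 10, 0.01, 20000
omega = 2 * np.pi * np.ones(n_segments)     # intrinsic segmental frequencies
omega[0] += 0.5                             # slight rostral frequency bias (assumed)
k = 5.0                                     # nearest-neighbour coupling strength
theta = np.random.default_rng(3).uniform(0, 2 * np.pi, n_segments)
for _ in range(steps):
    dtheta = omega.copy()
    dtheta[1:] += k * np.sin(theta[:-1] - theta[1:])    # descending coupling
    dtheta[:-1] += k * np.sin(theta[1:] - theta[:-1])   # ascending coupling
    theta += dt * dtheta
lags = np.angle(np.exp(1j * (theta[1:] - theta[:-1])))
print(lags)   # settles to a stable pattern of intersegmental phase lags
```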


A Computational Mechanism to Account for Averaged Modified Hand Trajectories

Neural Information Processing Systems

Using the double-step target displacement paradigm, the mechanisms underlying arm trajectory modification were investigated. With short (10-110 msec) inter-stimulus intervals, the resulting hand motions were initially directed in between the first and second target locations. The kinematic features of the modified motions were accounted for by the superposition scheme, which involves the vectorial addition of two independent point-to-point motion units: one for moving the hand toward an internally specified location and a second one for moving between that location and the final target location. The similarity between the inferred internally specified locations and previously reported measured endpoints of the first saccades in double-step eye-movement studies may suggest similarities between perceived target locations in eye and hand motor control.
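
The superposition scheme lends itself to a short numerical sketch: the modified hand path is the vector sum of two independent point-to-point units, the second starting after a short delay and carrying the hand from the internally specified location to the final target. Using minimum-jerk units and these particular locations, durations, and delay is an assumption made purely for illustration.

```python
import numpy as np

def min_jerk(start, end, duration, t):
    """Minimum-jerk position at times t (clamped to the movement duration)."""
    tau = np.clip(t / duration, 0.0, 1.0)[:, None]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return np.asarray(start) + s * (np.asarray(end) - np.asarray(start))

t = np.linspace(0.0, 0.8, 200)                      # seconds
delay = 0.1                                         # second unit starts 100 ms later
p0, p1, p2 = [0.0, 0.0], [10.0, 0.0], [7.0, 7.0]    # start, first target, final target
unit1 = min_jerk(p0, p1, 0.5, t)                    # toward the internally specified location
unit2 = min_jerk([0.0, 0.0], np.subtract(p2, p1), 0.5, t - delay)  # displacement to final target
hand_path = unit1 + unit2                           # vectorial addition of the two units
print(hand_path[0], hand_path[-1])                  # begins at p0, ends at the final target
```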