Oscillatory Neural Fields for Globally Optimal Path Planning
A neural network solution is proposed for solving path planning problems faced by mobile robots. The proposed network is a two-dimensional sheet of neurons forming a distributed representation of the robot's workspace. Lateral interconnections between neurons are "cooperative", so that the network exhibits oscillatory behaviour. These oscillations are used to generate solutions of Bellman's dynamic programming equation in the context of path planning. Simulation experiments imply that these networks locate globally optimal paths even in the presence of substantial levels of circuit noise.

1 Dynamic Programming and Path Planning

Consider a 2-DOF robot moving about in a 2-dimensional world. A robot's location is denoted by the real vector p.
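To make the dynamic-programming connection concrete, here is a minimal sketch (not the paper's oscillatory network) of solving Bellman's shortest-path equation on a discretized 2-D workspace by value iteration; the grid encoding, obstacle set, and unit step cost are illustrative assumptions.

```python
# A minimal sketch, assuming a discretized workspace: value iteration
# solving Bellman's shortest-path equation on a grid with obstacles.
import numpy as np

def plan(grid, goal, step_cost=1.0, max_iters=500):
    """grid: 2-D bool array, True marks an obstacle; goal: (row, col)."""
    V = np.full(grid.shape, np.inf)   # cost-to-go; inf = not yet reachable
    V[goal] = 0.0
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    rows, cols = grid.shape
    for _ in range(max_iters):
        new_V = V.copy()
        for r in range(rows):
            for c in range(cols):
                if grid[r, c] or (r, c) == goal:
                    continue
                # Bellman backup: best neighbour's cost-to-go plus one step.
                neighbours = [V[r + dr, c + dc] for dr, dc in moves
                              if 0 <= r + dr < rows and 0 <= c + dc < cols]
                new_V[r, c] = step_cost + min(neighbours)
        if np.array_equal(new_V, V):
            break                     # converged: V satisfies Bellman's equation
        V = new_V
    return V
```

Greedy descent on V from any start cell traces a globally optimal path; the paper's point is that a noisy, oscillatory analog network can compute such a solution instead of explicit iteration.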
Green's Function Method for Fast On-Line Learning Algorithm of Recurrent Neural Networks
Sun, Guo-Zheng, Chen, Hsing-Hen, Lee, Yee-Chun
The two well-known learning algorithms for recurrent neural networks are back-propagation (Rumelhart et al.; Werbos) and forward propagation (Williams and Zipser). The main drawback of back-propagation is its off-line backward path in time for error accumulation, which violates the on-line requirement of many practical applications. Although the forward propagation algorithm can be used in an on-line manner, its annoying drawback is the heavy computation load required to update the high-dimensional sensitivity matrix (O(N^4) operations for each time step). Developing a fast forward algorithm is therefore a challenging task.
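For reference, a minimal sketch of the Williams-Zipser forward (RTRL) sensitivity update, showing where the well-known O(N^4)-per-step cost comes from; the tanh nonlinearity and variable names are illustrative assumptions, not this paper's method.

```python
# A minimal sketch of the RTRL sensitivity update for a fully connected
# network of N units with state x(t+1) = tanh(W x(t)).
import numpy as np

def rtrl_step(W, x, P):
    """W: (N, N) weights; x: (N,) state; P: (N, N, N), P[k, i, j] = dx_k/dw_ij."""
    s = W @ x                       # net input
    x_new = np.tanh(s)
    fprime = 1.0 - x_new ** 2       # derivative of tanh at the net input
    N = len(x)
    P_new = np.zeros_like(P)
    for i in range(N):              # N^2 weight indices (i, j) ...
        for j in range(N):
            for k in range(N):      # ... times N unit indices k ...
                recur = W[k] @ P[:, i, j]   # ... times an N-term sum: O(N^4)
                P_new[k, i, j] = fprime[k] * (recur + (x[j] if k == i else 0.0))
    return x_new, P_new
```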
Improved Hidden Markov Model Speech Recognition Using Radial Basis Function Networks
Singer, Elliot, Lippmann, Richard P.
The RBF network consists of an input layer, a hidden layer composed of Gaussian basis functions, and an output layer. Connections from the input layer to the hidden layer are fixed at unity while those from the hidden layer to the output layer are trained by minimizing the overall mean-square error between actual and desired output values. Each RBF output node has a corresponding state in a set of HMM word models which represent the words in the vocabulary. HMM word models are left-to-right with no skip states and have a one-state background noise model at either end. The background noise models are identical for all words.
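The architecture described above can be sketched in a few lines; the Gaussian centers, shared width, and least-squares fit are illustrative assumptions standing in for the paper's training setup.

```python
# A minimal sketch: fixed unity input-to-hidden connections, Gaussian
# hidden units, and hidden-to-output weights fit by minimizing
# mean-square error (here via linear least squares).
import numpy as np

def rbf_hidden(X, centers, width):
    """Gaussian basis activations for inputs X (n, d) and centers (m, d)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_output_layer(X, Y, centers, width):
    """Fit hidden-to-output weights; Y holds the desired output values."""
    H = rbf_hidden(X, centers, width)
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W   # outputs for new data: rbf_hidden(X_new, centers, width) @ W
```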
Dynamically-Adaptive Winner-Take-All Networks
Unfortunately, convergence of normal WTA networks is extremely sensitive to the magnitudes of their weights, which must be hand-tuned and which generally provide the right amount of inhibition only across a relatively small range of initial conditions. This paper presents Dynamically-Adaptive Winner-Take-All (DAWTA) networks, which use a regulatory unit to provide the competitive inhibition to the units in the network. The DAWTA regulatory unit dynamically adjusts its level of activation during competition to provide the right amount of inhibition to differentiate between competitors and drive a single winner. This dynamic adaptation allows DAWTA networks to perform the winner-take-all function for nearly any network size or initial condition.
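A minimal sketch of the regulatory-unit idea, with made-up dynamics and constants (the paper's actual equations may differ): the regulatory unit integrates total network activity toward a one-winner target and feeds the result back as shared inhibition.

```python
# Illustrative sketch only: competing units with self-excitation, inhibited
# by a single regulatory unit that adapts until total activity is ~1.
import numpy as np

def dawta(inputs, steps=400, dt=0.05, gain=5.0):
    y = np.zeros_like(inputs, dtype=float)  # competing units
    r = 0.0                                 # regulatory (inhibitory) unit
    for _ in range(steps):
        drive = inputs + gain * y - r       # self-excitation minus shared inhibition
        y += dt * (-y + np.clip(drive, 0.0, 1.0))
        r += dt * (y.sum() - 1.0)           # adapt inhibition toward one active unit
    return y

print(dawta(np.array([0.40, 0.50, 0.45])))  # the largest input should win
```

Because the inhibition level is computed online rather than baked into fixed weights, the same constants serve across network sizes, which is the property the abstract emphasizes.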
Time-Warping Network: A Hybrid Framework for Speech Recognition
Levin, Esther, Pieraccini, Roberto, Bocchieri, Enrico
Such systems attempt to combine the best features of both models: the temporal structure of HMMs and the discriminative power of neural networks. In this work we define a time-warping (TW) neuron that extends the operation of the formal neuron of a back-propagation network by warping the input pattern to match it optimally to its weights. We show that a single-layer network of TW neurons is equivalent to a Gaussian density HMM-based recognition system.
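A minimal sketch of a TW neuron, under the assumption that "warping the input pattern to match it optimally to its weights" is realized by a dynamic-time-warping style alignment; the inner-product local score and the allowed transitions are illustrative choices, not the paper's exact formulation.

```python
# Illustrative sketch: align an input frame sequence to a neuron's weight
# sequence so that the summed frame-to-weight match is maximal.
import numpy as np

def tw_neuron(x_seq, w_seq):
    """x_seq: (T, d) input frames; w_seq: (K, d) weight templates, K <= T.
    Returns the best-alignment matching score."""
    T, K = len(x_seq), len(w_seq)
    score = np.full((T + 1, K + 1), -np.inf)
    score[0, 0] = 0.0
    for t in range(1, T + 1):
        for k in range(1, K + 1):
            local = float(x_seq[t - 1] @ w_seq[k - 1])   # frame-to-weight match
            score[t, k] = local + max(score[t - 1, k],      # stay on weight k
                                      score[t - 1, k - 1])  # advance to weight k+1
    return score[T, K]
```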
The Efficient Learning of Multiple Task Sequences
I present a modular network architecture and a learning algorithm based on incremental dynamic programming that allows a single learning agent to learn to solve multiple Markovian decision tasks (MDTs) with significant transfer of learning across the tasks. I consider a class of MDTs, called composite tasks, formed by temporally concatenating a number of simpler, elemental MDTs. The architecture is trained on a set of composite and elemental MDTs. The temporal structure of a composite task is assumed to be unknown and the architecture learns to produce a temporal decomposition. It is shown that under certain conditions the solution of a composite MDT can be constructed by computationally inexpensive modifications of the solutions of its constituent elemental MDTs (see the sketch after this introduction).

1 INTRODUCTION

Most applications of domain-independent learning algorithms have focused on learning single tasks. Building more sophisticated learning agents that operate in complex environments will require handling multiple tasks/goals (Singh, 1992). Research effort on the scaling problem has concentrated on discovering faster learning algorithms, and while that will certainly help, techniques that allow transfer of learning across tasks will be indispensable for building autonomous learning agents that have to learn to solve multiple tasks. In this paper I consider a learning agent that interacts with an external, finite-state, discrete-time, stochastic dynamical environment and faces multiple sequences of Markovian decision tasks (MDTs).
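A minimal sketch of the composition idea from the abstract, under the simplifying assumption that each elemental MDT's solution is available as a greedy policy plus a terminating subgoal state; `env.step` and the data layout are hypothetical, not the paper's architecture.

```python
# Illustrative sketch: solve a composite task (a temporal concatenation of
# elemental MDTs) by switching among the elemental tasks' stored policies
# as each subgoal is reached, reusing rather than relearning them.
def run_composite(env, policies, subgoals, start, max_steps=1000):
    """policies[i]: dict state -> action for elemental task i;
    subgoals[i]: the goal state that terminates elemental task i."""
    state, stage, path = start, 0, [start]
    for _ in range(max_steps):
        if state == subgoals[stage]:       # current elemental task solved
            stage += 1
            if stage == len(subgoals):
                break                      # whole composite task complete
        else:
            state = env.step(state, policies[stage][state])
            path.append(state)
    return path
```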
VISIT: A Neural Model of Covert Visual Attention
Visual attention is the ability to dynamically restrict processing to a subset of the visual field. Researchers have long argued that such a mechanism is necessary to efficiently perform many intermediate level visual tasks. This paper describes VISIT, a novel neural network model of visual attention.
Models Wanted: Must Fit Dimensions of Sleep and Dreaming
Hobson, J. Allan, Mamelak, Adam N., Sutton, Jeffrey P.
During waking and sleep, the brain and mind undergo a tightly linked and precisely specified set of changes in state. At the level of neurons, this process has been modeled by variations of Volterra-Lotka equations for cyclic fluctuations of brainstem cell populations. However, neural network models based upon rapidly developing knowledge of the specific population connectivities and their differential responses to drugs have not yet been developed. Furthermore, only the most preliminary attempts have been made to model across states. Some of our own attempts to link rapid eye movement (REM) sleep neurophysiology and dream cognition using neural network approaches are summarized in this paper.
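For orientation, a minimal sketch of classic Volterra-Lotka dynamics of the kind the abstract says have been adapted to two interacting brainstem cell populations; the coefficients, Euler integration, and population labels are illustrative assumptions, not a fitted model.

```python
# Illustrative sketch: two coupled populations whose activities fluctuate
# cyclically around the fixed point (d/c, a/b).
def volterra_lotka(x, y, a=1.0, b=0.5, c=0.5, d=1.0, dt=0.01, steps=5000):
    """x: activity of an excitatory population; y: of an inhibitory one."""
    trace = []
    for _ in range(steps):
        dx = (a - b * y) * x      # x grows, suppressed by y
        dy = (c * x - d) * y      # y grows only when driven by x
        x, y = x + dt * dx, y + dt * dy
        trace.append((x, y))      # oscillates cyclically, as in state cycling
    return trace
```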
HARMONET: A Neural Net for Harmonizing Chorales in the Style of J. S. Bach
Hild, Hermann, Feulner, Johannes, Menzel, Wolfram
The chord skeleton is obtained if eighth and sixteenth notes are viewed as omittable ornamentations. Furthermore, if the chords are conceived as harmonies with certain attributes such as "inversion" or "characteristic dissonances", the chorale is reducible to its harmonic skeleton, a thoroughbass-like representation (Figure 2).
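As an illustration only, the two reductions described above might be represented as follows; the field names and duration encoding are hypothetical, not HARMONET's actual data structures.

```python
# Illustrative sketch: a harmony with thoroughbass-like attributes, and the
# chord-skeleton reduction that drops short ornamentations.
from dataclasses import dataclass

@dataclass
class Harmony:
    root: str              # e.g. "G"
    inversion: int         # 0 = root position, 1 = first inversion, ...
    dissonances: tuple     # characteristic dissonances, e.g. ("7",)

def chord_skeleton(events, min_duration=1.0):
    """Drop eighth/sixteenth-note ornamentations (durations in quarter notes)."""
    return [e for e in events if e["duration"] >= min_duration]
```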