An Actor/Critic Algorithm that is Equivalent to Q-Learning
Crites, Robert H., Barto, Andrew G.
We prove the convergence of an actor/critic algorithm that is equivalent to Q-learning by construction. Its equivalence is achieved by encoding Q-values within the policy and value function of the actor and critic. The resultant actor/critic algorithm is novel in two ways: it updates the critic only when the most probable action is executed from any given state, and it rewards the actor using criteria that depend on the relative probability of the action that was executed.
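A minimal tabular sketch of the encoding described above (the environment interface and hyperparameters are our stand-ins, not the paper's construction): the actor's preference table doubles as a Q-table, the critic stores V(s) = max_a Q(s, a), and the critic is refreshed only when the most probable action was taken.

```python
import numpy as np

n_states, n_actions = 10, 4
alpha, gamma, temp = 0.1, 0.95, 0.5

Q = np.zeros((n_states, n_actions))   # actor preferences, encoding Q-values
V = np.zeros(n_states)                # critic: V(s) = max_a Q(s, a)

def policy(s):
    """Boltzmann policy over the actor's preferences."""
    p = np.exp(Q[s] / temp)
    p /= p.sum()
    return np.random.choice(n_actions, p=p)

def update(s, a, r, s_next):
    # Q-learning target, expressed through the critic's value estimate.
    td_error = r + gamma * V[s_next] - Q[s, a]
    Q[s, a] += alpha * td_error        # actor update
    if a == np.argmax(Q[s]):           # critic is updated only when the
        V[s] = Q[s].max()              # most probable action was executed
```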
Factorial Learning and the EM Algorithm
Many real world learning problems are best characterized by an interaction of multiple independent causes or factors. Discovering such causal structure from the data is the focus of this paper. Based on Zemel and Hinton's cooperative vector quantizer (CVQ) architecture, an unsupervised learning algorithm is derived from the Expectation-Maximization (EM) framework. Due to the combinatorial nature of the data generation process, the exact E-step is computationally intractable. Two alternative methods for computing the E-step are proposed: Gibbs sampling and mean-field approximation, and some promising empirical results are presented.
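A hedged sketch of a mean-field E-step for a CVQ-style model (our notation, not the paper's): each factor i contributes one codebook vector W[i][k] to the data x, plus Gaussian noise, and the factored responsibilities m[i] are iterated to a fixed point.

```python
import numpy as np

rng = np.random.default_rng(0)
n_factors, n_codes, dim, sigma2 = 3, 4, 8, 0.1
W = [rng.normal(size=(n_codes, dim)) for _ in range(n_factors)]
x = rng.normal(size=dim)

# m[i][k]: mean-field responsibility that factor i chose code k.
m = [np.full(n_codes, 1.0 / n_codes) for _ in range(n_factors)]

for _ in range(20):                       # fixed-point iterations
    for i in range(n_factors):
        # Residual after subtracting the other factors' expected outputs.
        others = sum(m[j] @ W[j] for j in range(n_factors) if j != i)
        resid = x - others
        # Update factor i's responsibilities against that residual.
        logits = -np.sum((resid - W[i]) ** 2, axis=1) / (2 * sigma2)
        m[i] = np.exp(logits - logits.max())
        m[i] /= m[i].sum()
```

The exact E-step would enumerate all n_codes ** n_factors joint assignments; the factored update above costs only n_factors * n_codes per sweep.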
Phase-Space Learning
Tsung, Fu-Sheng, Cottrell, Garrison W.
Existing recurrent net learning algorithms are inadequate. We introduce the conceptual framework of viewing recurrent training as matching vector fields of dynamical systems in phase space. Phase-space reconstruction techniques make the hidden states explicit, reducing temporal learning to a feed-forward problem. In short, we propose viewing iterated prediction [LF88] as the best way of training recurrent networks on deterministic signals. Using this framework, we can train multiple trajectories, ensure their stability, and design arbitrary dynamical systems.

1 INTRODUCTION

Existing general-purpose recurrent algorithms are capable of rich dynamical behavior. Unfortunately, straightforward applications of these algorithms to training fully recurrent networks on complex temporal tasks have had much less success than their feedforward counterparts. For example, to train a recurrent network to oscillate like a sine wave (the "hydrogen atom" of recurrent learning), existing techniques such as Real Time Recurrent Learning (RTRL) [WZ89] perform suboptimally. Williams & Zipser trained a two-unit network with RTRL, with one teacher signal. One unit of the resulting network showed a distorted waveform, the other only half the desired amplitude.
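The phase-space idea can be illustrated on the sine-wave task itself (a toy setup of ours; the paper trains networks, not a linear regression): a delay embedding makes the oscillator's hidden state explicit, a feed-forward map is fit by least squares, and iterated prediction then generates the oscillation.

```python
import numpy as np

t = np.arange(500)
x = np.sin(0.1 * t)

# Delay embedding: state s_t = (x_t, x_{t-1}); target is x_{t+1}.
S = np.stack([x[1:-1], x[:-2]], axis=1)
y = x[2:]

w, *_ = np.linalg.lstsq(S, y, rcond=None)  # feed-forward "vector field"

# Iterated prediction: feed the map its own outputs.
state = [x[1], x[0]]
traj = []
for _ in range(200):
    nxt = w @ state
    traj.append(nxt)
    state = [nxt, state[0]]
```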
Generalization in Reinforcement Learning: Safely Approximating the Value Function
Boyan, Justin A., Moore, Andrew W.
Reinforcement learning, the problem of getting an agent to learn to act from sparse, delayed rewards, has been advanced by techniques based on dynamic programming (DP). These algorithms compute a value function which gives, for each state, the minimum possible long-term cost commencing in that state. For the high-dimensional and continuous state spaces characteristic of real-world control tasks, a discrete representation of the value function is intractable; some form of generalization is required. A natural way to incorporate generalization into DP is to use a function approximator, rather than a lookup table, to represent the value function. This approach, which dates back to uses of Legendre polynomials in DP [Bellman et al., 1963], has recently worked well on several dynamic control problems [Mahadevan and Connell, 1990, Lin, 1993] and succeeded spectacularly on the game of backgammon [Tesauro, 1992, Boyan, 1992].
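A minimal fitted value iteration sketch of this approach (our toy problem and features, not the paper's experiments): the value function is a linear function approximator that is refit after every round of DP backups over sampled states.

```python
import numpy as np

gamma = 0.9
states = np.linspace(0.0, 1.0, 50)            # sampled 1-D state space

def phi(s):
    return np.array([1.0, s, s * s])          # quadratic features

feats = np.stack([phi(s) for s in states])
theta = np.zeros(3)                           # V(s) = phi(s) @ theta

for _ in range(100):
    targets = []
    for s in states:
        backups = []
        for a in (-1.0, 1.0):                 # move left or right
            s2 = float(np.clip(s + 0.05 * a, 0.0, 1.0))
            cost = 1.0 - s2                   # cheap near the goal at s = 1
            backups.append(cost + gamma * (phi(s2) @ theta))
        targets.append(min(backups))          # minimum long-term cost
    # Refit the approximator to the backed-up values (least squares).
    theta, *_ = np.linalg.lstsq(feats, np.array(targets), rcond=None)
```

The "safely" in the title matters: without safeguards, this backup-then-refit loop is not guaranteed to converge, which is the problem the paper addresses.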
A Connectionist Technique for Accelerated Textual Input: Letting a Network Do the Typing
Each year people spend a huge amount of time typing. The text people type typically contains a tremendous amount of redundancy due to predictable word usage patterns and the text's structure. This paper describes a neural network system called AutoTypist that monitors a person's typing and predicts what will be entered next. AutoTypist displays the most likely subsequent word to the typist, who can accept it with a single keystroke instead of typing it in its entirety. The multi-layer perceptron at the heart of AutoTypist adapts its predictions of likely subsequent text to the user's word usage patterns and to the characteristics of the text currently being typed. Increases in typing speed of 2-3% when typing English prose and 10-20% when typing C code have been demonstrated using the system, suggesting a potential time savings of more than 20 hours per user per year. In addition to increasing typing speed, AutoTypist reduces the number of keystrokes a user must type by a similar amount (2-3% for English, 10-20% for computer programs). This keystroke savings has the potential to significantly reduce the frequency and severity of repetitive stress injuries caused by typing, which are the most common injury suffered in today's office environment.
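An illustrative completion loop in the spirit of AutoTypist (not its code): a predictor proposes the most likely next word and the typist accepts it with one keystroke or keeps typing. A bigram table stands in here for the paper's multi-layer perceptron.

```python
from collections import Counter, defaultdict

bigrams = defaultdict(Counter)

def train(text):
    """Count which word follows which in the user's typing history."""
    words = text.split()
    for w1, w2 in zip(words, words[1:]):
        bigrams[w1][w2] += 1

def predict(prev_word):
    """Most likely next word given the previous one, or None."""
    nxt = bigrams.get(prev_word)
    return nxt.most_common(1)[0][0] if nxt else None

train("the cat sat on the mat and the cat slept")
print(predict("the"))   # -> 'cat' (acceptable with a single keystroke)
```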
Recurrent Networks: Second Order Properties and Pruning
Pedersen, Morten With, Hansen, Lars Kai
Second order properties of cost functions for recurrent networks are investigated. We analyze a layered fully recurrent architecture whose virtue is that it contains the conventional feedforward architecture as a special case. A detailed description of recursive computation of the full Hessian of the network cost function is provided. We discuss the possibility of invoking simplifying approximations of the Hessian and show how weight decay smooths the cost function and thereby greatly assists training. We present tentative pruning results, using Hassibi et al.'s Optimal Brain Surgeon, demonstrating that recurrent networks can construct an efficient internal memory.

1 LEARNING IN RECURRENT NETWORKS

Time series processing is an important application area for neural networks, and numerous architectures have been suggested, see e.g.
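For reference, one Optimal Brain Surgeon step (Hassibi et al.), sketched assuming the inverse Hessian is already available; the recursive computation of the recurrent network's Hessian is the subject of the paper and is not reproduced here.

```python
import numpy as np

def obs_prune_step(w, H_inv):
    """Remove the lowest-saliency weight and adjust the survivors."""
    sal = w ** 2 / (2.0 * np.diag(H_inv))      # saliency L_q
    q = np.argmin(sal)
    dw = -(w[q] / H_inv[q, q]) * H_inv[:, q]   # second-order correction
    return w + dw, q                           # w[q] is driven to zero
```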
Recognizing Handwritten Digits Using Mixtures of Linear Models
Hinton, Geoffrey E., Revow, Michael, Dayan, Peter
We construct a mixture of locally linear generative models of a collection of pixel-based images of digits, and use them for recognition. Different models of a given digit are used to capture different styles of writing, and new images are classified by evaluating their log-likelihoods under each model. We use an EM-based algorithm in which the M-step is computationally straightforward principal components analysis (PCA). Incorporating tangent-plane information [12] about expected local deformations only requires adding tangent vectors into the sample covariance matrices for the PCA, and it demonstrably improves performance.
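A sketch of the classification rule, using a single PCA-based Gaussian per class as a simplified stand-in for the paper's mixture of locally linear models; folding tangent vectors into the sample covariance and scoring by log-likelihood follow the abstract's recipe.

```python
import numpy as np

def fit_class(X, tangents=None, n_pc=10, noise=0.1):
    """Fit a PCA-based Gaussian to images X (rows) of one digit class."""
    mu = X.mean(axis=0)
    C = np.cov(X.T)
    if tangents is not None:               # expected local deformations:
        C = C + tangents.T @ tangents      # add tangent outer products
    vals, vecs = np.linalg.eigh(C)
    U, lam = vecs[:, -n_pc:], vals[-n_pc:]  # principal subspace

    def loglik(x):
        d = x - mu
        z = U.T @ d                        # coordinates in the subspace
        resid = d - U @ z                  # off-subspace residual
        return (-0.5 * np.sum(z ** 2 / (lam + noise))
                - 0.5 * np.sum(resid ** 2) / noise
                - 0.5 * np.sum(np.log(lam + noise))
                - 0.5 * (len(d) - n_pc) * np.log(noise))
    return loglik

# models = {digit: fit_class(X_digit) for each digit}; a test image is
# labelled by the digit whose model gives the highest log-likelihood.
```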
Direction Selectivity In Primary Visual Cortex Using Massive Intracortical Connections
Suarez, Humbert, Koch, Christof, Douglas, Rodney
Almost all models of orientation and direction selectivity in visual cortex are based on feedforward connection schemes, where geniculate input provides all excitation to both pyramidal and inhibitory neurons. The latter neurons then suppress the response of the former for non-optimal stimuli. However, anatomical studies show that up to 90% of the excitatory synaptic input onto any cortical cell is provided by other cortical cells. The massive excitatory feedback nature of cortical circuits is embedded in the canonical microcircuit of Douglas & Martin (1991). We here investigate analytically and through biologically realistic simulations the functioning of a detailed model of this circuitry, operating in a hysteretic mode. In the model, weak geniculate input is dramatically amplified by intracortical excitation, while inhibition has a dual role: (i) to prevent the early geniculate-induced excitation in the null direction, and (ii) to restrain excitation and ensure that the neurons fire only when the stimulus is in their receptive field.
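A toy illustration of the amplification argument (not the paper's detailed biophysical simulation): with recurrent excitatory gain w < 1, a weak geniculate input I settles to I / (1 - w), a much larger cortical response.

```python
import numpy as np

w, I = 0.9, 0.1            # recurrent gain, weak feedforward drive
r = 0.0
for _ in range(1000):      # linear rate dynamics, Euler steps
    r += 0.1 * (-r + w * r + I)
print(r, I / (1 - w))      # both ~1.0: tenfold amplification of the input
```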
Plasticity-Mediated Competitive Learning
Schraudolph, Nicol N., Sejnowski, Terrence J.
Differentiation between the nodes of a competitive learning network is conventionally achieved through competition on the basis of neural activity. Simple inhibitory mechanisms are limited to sparse representations, while decorrelation and factorization schemes that support distributed representations are computationally unattractive. By letting neural plasticity mediate the competitive interaction instead, we obtain diffuse, nonadaptive alternatives for fully distributed representations. We use this technique to simplify and improve our binary information gain optimization algorithm for feature extraction (Schraudolph and Sejnowski, 1993); the same approach could be used to improve other learning algorithms.

1 INTRODUCTION

Unsupervised neural networks frequently employ sets of nodes or subnetworks with identical architecture and objective function. Some form of competitive interaction is then needed for these nodes to differentiate and efficiently complement each other in their task.
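A generic illustration of competition mediated by plasticity rather than activity (a toy rule of ours, not the paper's binary information gain update): each unit's weight change is gated by the derivative of its sigmoid, so confidently committed units stop learning while uncommitted ones continue to differentiate, with no explicit activity-based inhibition.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))          # toy input patterns
W = 0.01 * rng.normal(size=(3, 5))     # three competing units

for x in X:
    y = 1.0 / (1.0 + np.exp(-W @ x))   # unit activities
    plasticity = y * (1.0 - y)         # large only for uncommitted units
    # Hebbian step gated by plasticity instead of activity competition.
    W += 0.1 * np.outer(plasticity * (y - 0.5), x)
```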
Using Voice Transformations to Create Additional Training Talkers for Word Spotting
Chang, Eric I., Lippmann, Richard P.
Lack of training data has always been a constraint in training speech recognizers. This research presents a voice transformation technique which increases the variety among training talkers. The resulting more varied training set provided up to 2.9 percentage points of improvement in the figure of merit (average detection rate) of a high performance word spotter. This improvement is similar to the increase in performance provided by doubling the amount of training data (Carlson, 1994). This technique can also be applied to other speech recognition systems such as continuous speech recognition, talker identification, and isolated speech recognition.
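A simple stand-in for the idea of synthesizing additional training talkers (plain resampling, which shifts pitch and formants together; the paper's transformation is more sophisticated than this sketch).

```python
import numpy as np

def resample_voice(signal, factor):
    """Warp the waveform's timescale to mimic a different talker."""
    n = int(len(signal) / factor)
    old_t = np.arange(len(signal))
    new_t = np.linspace(0, len(signal) - 1, n)
    return np.interp(new_t, old_t, signal)

# Each training utterance yields extra pseudo-talkers at a few warp factors:
# augmented = [resample_voice(utt, f) for f in (0.9, 1.0, 1.1)]
```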