Efficient Pattern Recognition Using a New Transformation Distance

Neural Information Processing Systems

Memory-based classification algorithms such as radial basis functions or K-nearest neighbors typically rely on simple distances (Euclidean, dot product, ...), which are not particularly meaningful on pattern vectors. More complex, better-suited distance measures are often expensive and rather ad hoc (elastic matching, deformable templates). We propose a new distance measure which (a) can be made locally invariant to any set of transformations of the input and (b) can be computed efficiently. We tested the method on large handwritten character databases provided by the Post Office and the NIST. Using invariances with respect to translation, rotation, scaling, shearing and line thickness, the method consistently outperformed all other systems tested on the same databases.
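
A minimal sketch of a locally invariant distance in this spirit (the function name, the least-squares solver, and the one-sided formulation are our assumptions, not details from the abstract): the set of transformed versions of a pattern is approximated by its tangent plane, and the distance is minimized over the tangent coefficients.

```python
import numpy as np

def transformation_distance(x, y, tangents):
    """One-sided sketch: distance from y to the tangent plane at x.

    tangents: (d, k) array whose columns approximate the effect of small
    transformations on x (e.g. finite differences of rotated, translated,
    or thickened copies of x).
    """
    # Find the coefficients a minimizing || (x + T a) - y || ...
    a, *_ = np.linalg.lstsq(tangents, y - x, rcond=None)
    # ... and return the residual distance to that closest transformed point.
    return np.linalg.norm(x + tangents @ a - y)
```

Because the minimization is a small linear least-squares problem (k is the number of transformations, typically 5-7), the distance stays cheap enough to use inside a K-nearest-neighbor loop.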


Automatic Learning Rate Maximization by On-Line Estimation of the Hessian's Eigenvectors

Neural Information Processing Systems

We propose a very simple and well-principled way of computing the optimal step size in gradient descent algorithms. The on-line version is very efficient computationally and is applicable to large backpropagation networks trained on large data sets. The main ingredient is a technique for estimating the principal eigenvalue(s) and eigenvector(s) of the objective function's second-derivative matrix (Hessian), which does not even require calculating the Hessian. Several other applications of this technique are proposed for speeding up learning, or for eliminating useless parameters.

1 INTRODUCTION

Choosing the appropriate learning rate, or step size, in a gradient descent procedure such as backpropagation is simultaneously one of the most crucial and most expert-intensive parts of neural-network learning. We propose a method for computing the best step size which is well-principled, simple, very cheap computationally, and, most of all, applicable to on-line training with large networks and data sets.
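
The core trick can be sketched in batch form: a power iteration on the Hessian that needs only gradient evaluations, via the finite-difference identity Hv ≈ (∇E(w + εv) − ∇E(w)) / ε. The paper's on-line version folds this into training with running averages; the sketch below (function names and constants are ours) is the simplified batch analogue.

```python
import numpy as np

def principal_hessian_eigenpair(grad_fn, w, n_iters=50, eps=1e-4):
    # Estimate the largest eigenvalue and eigenvector of the Hessian at w
    # using only gradient calls: H v is approximated by the finite
    # difference (grad(w + eps*v) - grad(w)) / eps, so the Hessian matrix
    # is never formed or stored.
    rng = np.random.default_rng(0)
    v = rng.standard_normal(w.size)
    v /= np.linalg.norm(v)
    g0 = grad_fn(w)
    lam = 0.0
    for _ in range(n_iters):
        hv = (grad_fn(w + eps * v) - g0) / eps  # Hessian-vector product
        lam = np.linalg.norm(hv)                # eigenvalue magnitude
        v = hv / lam                            # power-iteration step
    return lam, v

# For a quadratic objective, gradient descent diverges along the top
# curvature direction once the step size exceeds 2 / lam, so a step
# size on the order of 1 / lam is the natural safe choice.
```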


Computing with Almost Optimal Size Neural Networks

Neural Information Processing Systems

Artificial neural networks are composed of an interconnected collection of certain nonlinear devices; examples of commonly used devices include linear threshold elements, sigmoidal elements, and radial-basis elements. We employ results from harmonic analysis and the theory of rational approximation to obtain almost tight lower bounds on the size (i.e., the number of elements) of such networks.


Using Prior Knowledge in a NNPDA to Learn Context-Free Languages

Neural Information Processing Systems

Language inference and automata induction using recurrent neural networks have gained considerable interest in recent years. Nevertheless, the success of these models has been mostly limited to regular languages. Additional information in the form of a priori knowledge has proved important and at times necessary for learning complex languages (Abu-Mostafa, 1990; Al-Mashouq and Reed, 1991; Omlin and Giles, 1992; Towell, 1990). These studies have demonstrated that partial information incorporated in a connectionist model guides the learning process through constraints, yielding more efficient learning and better generalization. We have previously shown that the NNPDA model can learn deterministic context-free languages.
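
The NNPDA couples a recurrent controller to an external continuous stack. A minimal sketch of such a stack (our simplification; the class name and the depth-weighted read are assumptions): push and pop operate with real-valued strengths rather than discrete moves, so the stack action can be driven by a continuous network output.

```python
class ContinuousStack:
    """Stack whose entries carry real-valued strengths in (0, 1]."""

    def __init__(self):
        self.items = []  # list of (symbol_value, strength) pairs

    def push(self, symbol, strength):
        self.items.append((symbol, strength))

    def pop(self, amount):
        # Remove `amount` worth of strength from the top of the stack,
        # splitting the topmost entry if it is only partially consumed.
        while amount > 0 and self.items:
            sym, s = self.items.pop()
            if s > amount:
                self.items.append((sym, s - amount))
                break
            amount -= s

    def read(self, depth=1.0):
        # Depth-weighted blend of the topmost symbols; this value is fed
        # back to the recurrent controller on the next time step.
        out, remaining = 0.0, depth
        for sym, s in reversed(self.items):
            take = min(s, remaining)
            out += take * sym
            remaining -= take
            if remaining <= 0:
                break
        return out
```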


Input Reconstruction Reliability Estimation

Neural Information Processing Systems

This paper describes a technique called Input Reconstruction Reliability Estimation (IRRE) for determining the response reliability of a restricted class of multi-layer perceptrons (MLPs). The technique uses a network's ability to accurately encode the input pattern in its internal representation as a measure of its reliability. The more accurately a network is able to reconstruct the input pattern from its internal representation, the more reliable the network is considered to be. IRRE provides a good estimate of the reliability of MLPs trained for autonomous driving. Results are presented in which the reliability estimates provided by IRRE are used to select between networks trained for different driving situations.

1 Introduction

In many real-world domains, it is important to know the reliability of a network's response, since a single network cannot be expected to accurately handle all possible inputs.
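
The core idea fits in a few lines. In this sketch, `encode` and `decode` stand for the forward pass to the internal representation and the learned reconstruction path; the function name and the use of mean squared error are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def irre_score(encode, decode, x):
    # Reconstruct the input from the network's internal representation;
    # the worse the reconstruction, the less familiar the input and the
    # less reliable the network's response is taken to be.
    x_hat = decode(encode(x))
    return float(np.mean((x - x_hat) ** 2))  # lower error = more reliable

# Hypothetical usage for arbitrating between driving networks: hand
# control to the network whose reconstruction error on the current
# camera image is lowest.
# best = min(networks, key=lambda n: irre_score(n.encode, n.decode, image))
```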


A Hybrid Neural Net System for State-of-the-Art Continuous Speech Recognition

Neural Information Processing Systems

Until recently, state-of-the-art, large-vocabulary, continuous speech recognition (CSR) has employed Hidden Markov Modeling (HMM) to model speech sounds. In an attempt to improve over HMM, we developed a hybrid system that integrates HMM technology with neural networks. We present the concept of a "Segmental Neural Net".


Feudal Reinforcement Learning

Neural Information Processing Systems

One way to speed up reinforcement learning is to enable learning to happen simultaneously at multiple resolutions in space and time. This paper shows how to create a Q-learning managerial hierarchy in which high-level managers learn how to set tasks for their sub-managers who, in turn, learn how to satisfy them. Sub-managers need not initially understand their managers' commands. They simply learn to maximise their reinforcement in the context of the current command. We illustrate the system using a simple maze task. As the system learns how to get around, satisfying commands at the multiple levels, it explores more efficiently than standard, flat Q-learning and builds a more comprehensive map.

1 INTRODUCTION

Straightforward reinforcement learning has been quite successful at some relatively complex tasks like playing backgammon (Tesauro, 1992).
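
A minimal two-level sketch of the managerial scheme (our simplification on a 5x5 grid; the constants, reward scheme, and restriction to two levels are assumptions): the manager is rewarded for reaching the true goal, while the sub-manager is rewarded only for satisfying the manager's current command.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
CELLS = [(i, j) for i in range(5) for j in range(5)]
GOAL = (4, 4)

q_mgr = defaultdict(float)  # manager:     Q[(state, command)]
q_sub = defaultdict(float)  # sub-manager: Q[(state, command, action)]

def greedy(score, choices):
    # Epsilon-greedy selection over an arbitrary scoring function.
    if random.random() < EPS:
        return random.choice(choices)
    return max(choices, key=score)

for _ in range(2000):
    s = (0, 0)
    while s != GOAL:
        cmd = greedy(lambda c: q_mgr[(s, c)], CELLS)  # manager sets a task
        s0 = s
        for _ in range(10):  # sub-manager tries to satisfy the command
            a = greedy(lambda a: q_sub[(s, cmd, a)], ACTIONS)
            nxt = (min(4, max(0, s[0] + a[0])),
                   min(4, max(0, s[1] + a[1])))
            r_sub = 1.0 if nxt == cmd else 0.0  # rewarded for obedience only
            best = max(q_sub[(nxt, cmd, b)] for b in ACTIONS)
            q_sub[(s, cmd, a)] += ALPHA * (r_sub + GAMMA * best
                                           - q_sub[(s, cmd, a)])
            s = nxt
            if s == cmd:
                break
        r_mgr = 1.0 if s == GOAL else 0.0  # manager pursues the real goal
        best_m = max(q_mgr[(s, c)] for c in CELLS)
        q_mgr[(s0, cmd)] += ALPHA * (r_mgr + GAMMA * best_m
                                     - q_mgr[(s0, cmd)])
```

The key design point is that the sub-manager's reward never mentions the goal: it maximises reinforcement purely in the context of the current command, which is what lets learning proceed at both levels simultaneously.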


Robot-Building Lab and Contest at the 1993 National AI Conference

AI Magazine

A robot-building lab and contest was held at the Eleventh National Conference on Artificial Intelligence. Teams of three worked day and night for 72 hours to build tabletop autonomous robots out of Legos, a small microcontroller board, and sensors. The robots then competed head to head in two events. This article contains my personal recollections of the lab and contest.


Goal-Driven Learning: Fundamental Issues: A Symposium Report

AI Magazine

In AI, psychology, and education, a growing body of research supports the view that learning is a goal-directed process. Psychological experiments show that people with varying goals process information differently, studies in education show that goals have a strong effect on what students learn, and functional arguments in machine learning support the necessity of goal-based focusing of learner effort. At the Fourteenth Annual Conference of the Cognitive Science Society, a symposium brought together researchers in AI, psychology, and education to discuss goal-driven learning. This article presents the fundamental points illuminated at the symposium, placing them in the context of open questions and current research directions in goal-driven learning.