Context-Dependent Classes in a Hybrid Recurrent Network-HMM Speech Recognition System
Kershaw, Dan J., Robinson, Anthony J., Hochberg, Mike
A method for incorporating context-dependent phone classes in a connectionist-HMM hybrid speech recognition system is introduced. A modular approach is adopted, where single-layer networks discriminate between different context classes given the phone class and the acoustic data. The context networks are combined with a context-independent (CI) network to generate context-dependent (CD) phone probability estimates. Experiments show average word-error-rate reductions of 16% and 13% over the CI system on ARPA 5,000-word and SQALE 20,000-word tasks, respectively. Due to improved modelling, decoding with the CD system is more than twice as fast as with the CI system.
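The modular combination described above can be read as a probability factorization: a CI network estimates P(phone | x) and a per-phone context network estimates P(context | phone, x), so their product gives the CD estimate. A minimal sketch of that factorization, with random numbers standing in for trained network outputs and illustrative array shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_phones, n_contexts = 4, 3  # illustrative sizes

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# CI network output for one acoustic frame x: P(phone | x)
p_phone = softmax(rng.normal(size=n_phones))

# One context network per phone class: P(context | phone, x)
p_context_given_phone = softmax(rng.normal(size=(n_phones, n_contexts)))

# CD phone probability estimates: P(phone, context | x)
p_cd = p_phone[:, None] * p_context_given_phone

# The product is a valid joint distribution over (phone, context)
assert np.isclose(p_cd.sum(), 1.0)
```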
Generating Accurate and Diverse Members of a Neural-Network Ensemble
Opitz, David W., Shavlik, Jude W.
In particular, combining separately trained neural networks (commonly referred to as a neural-network ensemble) has been demonstrated to be particularly successful (Alpaydin, 1993; Drucker et al., 1994; Hansen and Salamon, 1990; Hashem et al., 1994; Krogh and Vedelsby, 1995; Maclin and Shavlik, 1995; Perrone, 1992). Both theoretical (Hansen and Salamon, 1990; Krogh and Vedelsby, 1995) and empirical (Hashem et al., 1994; Maclin and Shavlik, 1995) work has shown that a good ensemble is one whose individual networks are both accurate and make their errors on different parts of the input space; however, most previous work has either focused on combining the outputs of multiple trained networks or only indirectly addressed how to generate a good set of networks.
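The accuracy/diversity trade-off has a precise form for regression ensembles: by the ambiguity decomposition of Krogh and Vedelsby (1995), the squared error of a uniformly weighted ensemble equals the average member error minus the average disagreement with the ensemble. A short numerical check, with synthetic predictions standing in for trained networks:

```python
import numpy as np

rng = np.random.default_rng(1)
target = rng.normal(size=50)                             # true values
members = target + rng.normal(scale=0.5, size=(5, 50))   # 5 noisy "networks"

ensemble = members.mean(axis=0)                          # uniform combination

member_err = ((members - target) ** 2).mean()            # avg member error
ambiguity = ((members - ensemble) ** 2).mean()           # avg disagreement
ensemble_err = ((ensemble - target) ** 2).mean()

# Ambiguity decomposition: holds exactly for uniform weighting
assert np.isclose(ensemble_err, member_err - ambiguity)
```

Since the ambiguity term is nonnegative, the ensemble is never worse than the average member, and it improves as members disagree more while staying individually accurate.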
Is Learning The n-th Thing Any Easier Than Learning The First?
This paper investigates learning in a lifelong context. Lifelong learning addresses situations in which a learner faces a whole stream of learning tasks. Such scenarios provide the opportunity to transfer knowledge across multiple learning tasks, in order to generalize more accurately from less training data. In this paper, several different approaches to lifelong learning are described and applied in an object recognition domain. It is shown that, across the board, lifelong learning approaches generalize consistently more accurately from less training data, through their ability to transfer knowledge across learning tasks. 1 Introduction Supervised learning is concerned with approximating an unknown function based on examples. Virtually all current approaches to supervised learning assume that one is given a set of input-output examples, denoted by X, which characterize an unknown function, denoted by f.
Reinforcement Learning by Probability Matching
Sabes, Philip N., Jordan, Michael I.
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139. We present a new algorithm for associative reinforcement learning. The algorithm is based on the idea of matching a network's output probability with a probability distribution derived from the environment's reward signal. This Probability Matching algorithm is shown to perform faster and to be less susceptible to local minima than previously existing algorithms. We use Probability Matching to train mixture-of-experts networks, an architecture for which other reinforcement learning rules fail to converge reliably on even simple problems. This architecture is particularly well suited to our algorithm, as it can compute arbitrarily complex functions while the calculation of the output probability remains simple. 1 INTRODUCTION The problem of learning associative networks from scalar reinforcement signals is notoriously difficult.
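One simple instantiation of the matching idea (a sketch, not the paper's exact update rule) is to define a Boltzmann-style target distribution proportional to exp(reward / T) and move a softmax policy toward it by gradient steps on the cross-entropy. The rewards and temperature below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n_actions, T, lr = 3, 0.5, 0.5

logits = np.zeros(n_actions)
rewards = np.array([0.1, 1.0, 0.3])    # hypothetical mean reward per action

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Reward-derived target distribution: p*(a) proportional to exp(r(a)/T)
target = softmax(rewards / T)

for _ in range(200):
    p = softmax(logits)
    # Gradient of cross-entropy H(target, p) w.r.t. logits is (p - target)
    logits += lr * (target - p)

p = softmax(logits)
assert p.argmax() == rewards.argmax()   # policy concentrates on best action
```

Matching a distribution rather than maximizing reward directly keeps exploration alive: low-reward actions retain nonzero probability, which is one reason this style of update is less prone to premature convergence.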
Discriminant Adaptive Nearest Neighbor Classification and Regression
Hastie, Trevor, Tibshirani, Robert
Nearest neighbor classification expects the class conditional probabilities to be locally constant, and suffers from bias in high dimensions. We propose a locally adaptive form of nearest neighbor classification to try to finesse this curse of dimensionality. We use a local linear discriminant analysis to estimate an effective metric for computing neighborhoods. We determine the local decision boundaries from centroid information, and then shrink neighborhoods in directions orthogonal to these local decision boundaries and elongate them parallel to the boundaries. Thereafter, any neighborhood-based classifier can be employed using the modified neighborhoods. We also propose a method for global dimension reduction that combines local dimension information.
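A minimal sketch of the discriminant-based metric: from between-class scatter B and within-class scatter W in a neighborhood, form Sigma = W^{-1/2} (B* + eps I) W^{-1/2} with B* = W^{-1/2} B W^{-1/2}, so that distances measured as (x - x0)' Sigma (x - x0) are inflated along discriminant directions, i.e. neighborhoods shrink orthogonal to the local decision boundary. The two-dimensional data here is synthetic, with the boundary along the first coordinate:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 2))
y = (X[:, 0] > 0).astype(int)          # class boundary at x0 = 0

def dann_metric(X, y, eps=1.0):
    mean = X.mean(axis=0)
    d = X.shape[1]
    W = np.zeros((d, d)); B = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        centered = Xc - Xc.mean(axis=0)
        W += centered.T @ centered                     # within-class scatter
        m = (Xc.mean(axis=0) - mean)[:, None]
        B += len(Xc) * (m @ m.T)                       # between-class scatter
    W /= len(X); B /= len(X)
    vals, vecs = np.linalg.eigh(W)                     # W^{-1/2} by eigendecomposition
    W_isqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    B_star = W_isqrt @ B @ W_isqrt
    return W_isqrt @ (B_star + eps * np.eye(d)) @ W_isqrt

Sigma = dann_metric(X, y)
# Larger metric weight on x0: neighborhoods shrink across the boundary
assert Sigma[0, 0] > Sigma[1, 1]
```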
Neural Control for Nonlinear Dynamic Systems
Yu, Ssu-Hsin, Annaswamy, Anuradha M.
A neural network based approach is presented for controlling two distinct types of nonlinear systems. The first corresponds to nonlinear systems with parametric uncertainties where the parameters occur nonlinearly. The second corresponds to systems for which stabilizing control structures cannot be determined. The proposed neural controllers are shown to result in closed-loop system stability under certain conditions.
Stock Selection via Nonlinear Multi-Factor Models
This paper discusses the use of multilayer feedforward neural networks for predicting a stock's excess return based on its exposure to various technical and fundamental factors. To demonstrate the effectiveness of the approach, a hedged portfolio consisting of equally capitalized long and short positions is constructed, and its historical returns are benchmarked against T-bill returns and the S&P 500 index.
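The hedged-portfolio construction can be sketched in a few lines: rank stocks by the model's predicted excess return, go long the top half and short the bottom half with equal capital, so net market exposure is approximately zero. The scores and realized returns below are synthetic stand-ins for the network's predictions and historical data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10
predicted = rng.normal(size=n)   # model scores (stand-in for the network)
realized = rng.normal(size=n)    # realized period returns

order = np.argsort(predicted)
short_leg, long_leg = order[: n // 2], order[n // 2:]

# Equally capitalized legs: portfolio return is long minus short average
portfolio_return = realized[long_leg].mean() - realized[short_leg].mean()
```

Because the two legs are equally capitalized, the portfolio's return reflects the model's cross-sectional ranking ability rather than the market's overall direction, which is why it is benchmarked against T-bills as well as the index.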
Harmony Networks Do Not Work
Harmony networks have been proposed as a means by which connectionist models can perform symbolic computation. Indeed, proponents claim that a harmony network can be built that constructs parse trees for strings in a context-free language. This paper shows that harmony networks do not work in the following sense: they construct many outputs that are not valid parse trees. In order to show that the notion of systematicity is compatible with connectionism, Paul Smolensky, Geraldine Legendre and Yoshiro Miyata (Smolensky, Legendre, and Miyata 1992; Smolensky 1993; Smolensky, Legendre, and Miyata 1994) proposed a mechanism, "Harmony Theory," by which connectionist models purportedly perform structure-sensitive operations without implementing classical algorithms. Harmony theory describes a "harmony network" which, in the course of reaching a stable equilibrium, apparently computes parse trees that are valid according to the rules of a particular context-free grammar.
Competence Acquisition in an Autonomous Mobile Robot using Hardware Neural Techniques
Jackson, Geoffrey B., Murray, Alan F.
In this paper we examine the practical use of hardware neural networks in an autonomous mobile robot. We have developed a hardware neural system based around a custom VLSI chip, EPSILON III, designed specifically for embedded hardware neural applications. We present here a demonstration application of an autonomous mobile robot that highlights the flexibility of this system.