Learning with ensembles: How overfitting can be useful
We study the characteristics of learning with ensembles. Solving exactly the simple model of an ensemble of linear students, we find surprisingly rich behaviour. For learning in large ensembles, it is advantageous to use under-regularized students, which actually over-fit the training data. Globally optimal performance can be obtained by choosing the training set sizes of the students appropriately. For smaller ensembles, optimization of the ensemble weights can yield significant improvements in ensemble generalization performance, in particular if the individual students are subject to noise in the training process. Choosing students with a wide range of regularization parameters makes this improvement robust against changes in the unknown level of noise in the training data.

1 INTRODUCTION

An ensemble is a collection of a (finite) number of neural networks or other types of predictors that are trained for the same task.
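Below is a minimal sketch of the core idea in this abstract: an ensemble of under-regularized linear students whose averaged predictions generalize better than any single student. The model, data, regularization strength, and ensemble size are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: ensemble of "linear students" (ridge regressors) that individually
# over-fit (tiny regularization) but do well once their predictions are averaged.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d, K = 100, 50, 20                      # examples, input dim, ensemble size
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.5 * rng.normal(size=n)  # noisy linear teacher
X_test = rng.normal(size=(1000, d))
y_test = X_test @ w_true

students = []
for _ in range(K):
    idx = rng.choice(n, size=n // 2, replace=False)          # each student sees a subset
    students.append(Ridge(alpha=1e-3).fit(X[idx], y[idx]))   # under-regularized

single_err = np.mean((students[0].predict(X_test) - y_test) ** 2)
ens_pred = np.mean([s.predict(X_test) for s in students], axis=0)
ens_err = np.mean((ens_pred - y_test) ** 2)
print(f"single student MSE: {single_err:.3f}, ensemble MSE: {ens_err:.3f}")
```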
Rapid Quality Estimation of Neural Network Input Representations
Cherkauer, Kevin J., Shavlik, Jude W.
However, ANNs are usually costly to train, preventing one from trying many different representations. In this paper, we address this problem by introducing and evaluating three new measures for quickly estimating ANN input representation quality. Two of these, called DBleaves and Min(leaves), consistently outperform Rendell and Ragavan's (1993) blurring measure in accurately ranking different input representations for ANN learning on three difficult, real-world datasets.
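The following is a hedged illustration of the general approach such measures take: score a candidate representation by the size of a cheap decision tree fit to it, rather than by training a full ANN. The exact definitions of DBleaves and Min(leaves) are not reproduced here; the data and scoring rule below are assumptions.

```python
# Toy proxy: fewer decision-tree leaves suggests an easier-to-learn
# representation (an assumption about the spirit of leaf-count measures).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def leaves_score(X, y, seed=0):
    tree = DecisionTreeClassifier(random_state=seed).fit(X, y)
    return tree.get_n_leaves()

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
X_good = y[:, None] + 0.3 * rng.normal(size=(500, 5))  # informative features
X_bad = rng.normal(size=(500, 5))                      # uninformative features
print("good representation, leaves:", leaves_score(X_good, y))  # few leaves
print("bad representation, leaves:", leaves_score(X_bad, y))    # many leaves
```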
Quadratic-Type Lyapunov Functions for Competitive Neural Networks with Different Time-Scales
The dynamics of complex neural networks modelling the self-organization process in cortical maps must include the aspects of long- and short-term memory. The behaviour of the network is thus characterized by an equation of neural activity as a fast phenomenon and an equation of synaptic modification as a slow part of the neural system. We present a quadratic-type Lyapunov function for the flow of a competitive neural system with fast and slow dynamic variables. We also show the consequences of the stability analysis for the neural net parameters.

1 INTRODUCTION

This paper investigates a special class of laterally inhibited neural networks. In particular, we have examined the dynamics of a restricted class of laterally inhibited neural networks from a rigorous analytic standpoint.
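A toy simulation can make the two-time-scale structure concrete: fast activity variables relax on an O(1) time scale while synaptic variables drift on an O(eps) scale. The equations below are a generic fast-slow competitive system, not the paper's model; all parameter values are illustrative.

```python
# Generic fast-slow sketch: fast neural activity x coupled to slowly
# adapting synapses m, with the small parameter eps separating time scales.
import numpy as np

def simulate(T=5000, dt=0.001, eps=0.01):
    rng = np.random.default_rng(0)
    x = rng.normal(size=3)                   # fast activity variables
    m = rng.normal(size=3)                   # slow synaptic variables
    s = np.array([1.0, 0.5, -0.5])           # fixed external stimulus
    W = -0.5 * np.ones((3, 3)) + np.eye(3)   # lateral inhibition, self-excitation
    for _ in range(T):
        x = x + dt * (-x + W @ np.tanh(x) + m * s)  # fast dynamics, O(1)
        m = m + dt * eps * (-m + np.tanh(x) * s)    # slow dynamics, O(eps)
    return x, m

x, m = simulate()
print("activity:", x, "synapses:", m)
```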
Statistical Theory of Overtraining - Is Cross-Validation Asymptotically Effective?
Amari, Shun-ichi, Murata, Noboru, Müller, Klaus-Robert, Finke, Michael, Yang, Howard Hua
A statistical theory for overtraining is proposed. The analysis treats realizable stochastic neural networks, trained with Kullback-Leibler loss, in the asymptotic case. It is shown that the asymptotic gain in the generalization error is small if we perform early stopping, even if we have access to the optimal stopping time. Considering cross-validated stopping, we answer the question of what ratio the examples should be divided into training and testing sets in order to obtain optimum performance. In the non-asymptotic region, cross-validated early stopping always decreases the generalization error. Our large-scale simulations, performed on a CM5, are in good agreement with our analytical findings.
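As a concrete illustration of cross-validated early stopping, the sketch below holds out a fraction r of the examples, tracks validation loss during gradient descent, and keeps the weights at its minimum. The split ratio, model, and data are assumptions, not the asymptotically optimal ratio derived in the paper.

```python
# Early stopping via a held-out validation set on a toy logistic model.
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 600, 20, 0.7                     # r = fraction used for training
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true + rng.normal(size=n) > 0).astype(float)
n_tr = int(r * n)
X_tr, y_tr, X_va, y_va = X[:n_tr], y[:n_tr], X[n_tr:], y[n_tr:]

def nll(w, X, y):                          # logistic negative log-likelihood
    p = 1 / (1 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

w = np.zeros(d)
best_w, best_va = w.copy(), np.inf
for t in range(2000):
    p = 1 / (1 + np.exp(-X_tr @ w))
    w -= 0.1 * X_tr.T @ (p - y_tr) / n_tr  # gradient step on training loss
    va = nll(w, X_va, y_va)
    if va < best_va:                       # remember the validation optimum
        best_va, best_w = va, w.copy()
print("validation loss at the stopping point:", best_va)
```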
Beating a Defender in Robotic Soccer: Memory-Based Learning of a Continuous Function
Stone, Peter, Veloso, Manuela M.
Our research works towards this broad goal from a Machine Learning perspective. We are particularly interested in investigating how an intelligent agent can choose an action in an adversarial environment. We assume that the agent has a specific goal to achieve. We conduct this investigation in a framework where teams of agents compete in a game of robotic soccer. The real system of model cars remotely controlled from off-board computers is under development.
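A minimal sketch of memory-based learning of a continuous function, in the spirit of this work: store experienced (situation, outcome) pairs and predict by distance-weighted averaging over the nearest memories. The two-dimensional "defender position" feature and the success rule are hypothetical stand-ins for the paper's robotic-soccer setup.

```python
# Memory-based (nearest-neighbor) regression over stored experiences.
import numpy as np

class MemoryBasedLearner:
    def __init__(self, k=5):
        self.k, self.X, self.y = k, [], []

    def store(self, x, outcome):
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(float(outcome))

    def predict(self, x):
        X = np.stack(self.X)
        d = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
        nn = np.argsort(d)[: self.k]
        w = 1.0 / (d[nn] + 1e-9)             # closer memories count more
        return float(np.dot(w, np.asarray(self.y)[nn]) / w.sum())

rng = np.random.default_rng(0)
learner = MemoryBasedLearner(k=5)
for _ in range(200):
    defender_pos = rng.uniform(-1, 1, size=2)
    success = float(defender_pos[0] > 0.2)   # toy ground-truth rule
    learner.store(defender_pos, success)
print("predicted success:", learner.predict([0.5, 0.0]))
```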
A Predictive Switching Model of Cerebellar Movement Control
Barto, Andrew G., Houk, James C.
The existence of significant delays in sensorimotor feedback pathways has led several researchers to suggest that the cerebellum might function as a forward model of the motor plant in order to predict the sensory consequences of motor commands before actual feedback is available; e.g., (Ito, 1984; Keeler, 1990; Miall et al., 1993). While we agree that there are many potential roles for forward models in motor control systems, as discussed, e.g., in (Wolpert et al., 1995), we present a hypothesis about how the cerebellum could participate in regulating movement in the presence of significant feedback delays without resorting to a forward model. We show how a very simplified version of the adjustable pattern generator (APG) model being developed by Houk and colleagues (Berthier et al., 1993; Houk et al., 1995) can learn to control endpoint positioning of a nonlinear spring-mass system with significant delays in both afferent and efferent pathways. Although much simpler than a multilink dynamic arm, control of this spring-mass system involves some of the challenges critical to the control of a more realistic motor system and serves to illustrate the principles we propose. Preliminary results appear in (Buckingham et al., 1995).
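To make the control problem concrete, here is a toy sketch (not the APG model itself): a damped spring-mass system with delayed sensory feedback, driven by a pulse-step command whose pulse is switched off when the delayed position reading crosses a threshold. All delays, gains, and thresholds are assumptions.

```python
# Toy delayed spring-mass control with a simple switching (pulse-step) command.
dt, delay_steps = 0.001, 100             # 100 ms sensory delay at 1 kHz
k, b, m, target = 4.0, 1.0, 1.0, 1.0     # spring, damping, mass, goal position
x, v = 0.0, 0.0
hold = k * target                        # step force that holds the endpoint
pulse_on, pulse = True, 4.0              # extra pulse to get moving quickly
history = [0.0] * delay_steps            # buffer of delayed position readings

for _ in range(5000):
    delayed_x = history.pop(0)
    if pulse_on and delayed_x > 0.6 * target:  # switch the pulse off early,
        pulse_on = False                       # anticipating the delay
    u = hold + (pulse if pulse_on else 0.0)
    a = (u - k * x - b * v) / m                # linearized plant for brevity
    v += a * dt
    x += v * dt
    history.append(x)

print(f"final position: {x:.3f} (target {target:.1f})")
```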
Primitive Manipulation Learning with Connectionism
Infants' manipulative exploratory behavior within the environment is a vehicle of cognitive stimulation [McCall 1974]. During this time, infants practice and perfect sensorimotor patterns that become behavioral modules which will be seriated and embedded in more complex actions. This paper explores the development of such primitive learning systems using an embodied lightweight hand which will be used for a humanoid being developed at the MIT Artificial Intelligence Laboratory [Brooks and Stein 1993]. Primitive grasping procedures are learned from sensory inputs using a connectionist reinforcement algorithm, while two submodules preprocess sensory data to recognize the hardness of objects and detect shear using competitive learning and back-propagation algorithm strategies, respectively. This system is not only consistent and quick during the initial learning stage, but also adaptable to new situations after training is completed.
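The competitive-learning component used by the sensory submodules can be sketched as winner-take-all prototype learning: units compete for each input and only the winner's weights move toward the sample. The toy "hardness" readings below are made up for illustration and are not the paper's sensor pipeline.

```python
# Winner-take-all competitive learning on toy two-class sensor readings.
import numpy as np

rng = np.random.default_rng(0)
soft = rng.normal(loc=0.2, scale=0.05, size=(100, 3))  # toy "soft object" readings
hard = rng.normal(loc=0.8, scale=0.05, size=(100, 3))  # toy "hard object" readings
data = rng.permutation(np.vstack([soft, hard]))

W = rng.uniform(0, 1, size=(2, 3))        # two competing prototype units
lr = 0.05
for x in data:
    winner = np.argmin(np.linalg.norm(W - x, axis=1))  # closest prototype wins
    W[winner] += lr * (x - W[winner])                  # only the winner updates
print("learned prototypes:\n", W)         # one near 0.2, one near 0.8
```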
Human Face Detection in Visual Scenes
Rowley, Henry A., Baluja, Shumeet, Kanade, Takeo
We present a neural network-based face detection system. A retinally connected neural network examines small windows of an image, and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We use a bootstrap algorithm for training, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting non-face training examples, which must be chosen to span the entire space of non-face images.
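The bootstrap idea can be shown with a self-contained toy: run the current model over face-free data and recycle its false detections as new negative training examples. The small MLP and synthetic "windows" below stand in for the paper's retinally connected network and image windows; sizes and rounds are assumptions.

```python
# Bootstrapping hard negatives: false detections become training negatives.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
faces = rng.normal(loc=1.0, size=(200, 20))           # toy positive windows
nonface_pool = rng.normal(loc=0.0, size=(5000, 20))   # large face-free pool
negatives = nonface_pool[rng.choice(5000, 50)]        # tiny random seed set

for round_ in range(4):
    X = np.vstack([faces, negatives])
    y = np.r_[np.ones(len(faces)), np.zeros(len(negatives))]
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                        random_state=0).fit(X, y)
    false_hits = nonface_pool[net.predict(nonface_pool) == 1]  # false detections
    print(f"round {round_}: {len(false_hits)} false detections")
    if len(false_hits) == 0:
        break
    negatives = np.vstack([negatives, false_hits[:200]])  # add them as negatives
```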
From Isolation to Cooperation: An Alternative View of a System of Experts
Schaal, Stefan, Atkeson, Christopher G.
We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. In contrast to other approaches, the experts are trained independently and do not compete for data during learning. Only when a prediction for a query is required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross-validation error using second-order methods. In this way, an expert is able to find a local distance metric by adjusting the size and shape of the receptive field in which its predictions are valid, and also to detect relevant input features by adjusting its bias on the importance of individual input dimensions. We derive asymptotic results for our method. In a variety of simulations the properties of the algorithm are demonstrated with respect to interference, learning speed, prediction accuracy, feature detection, and task-oriented incremental learning.
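A minimal sketch of the prediction-time cooperation described here: local linear experts, each valid inside a Gaussian receptive field, are fit independently by weighted least squares and blended by their activations at query time. Receptive-field centers and widths are fixed below, not adapted as in the paper, and the one-dimensional data is illustrative.

```python
# Independently trained local linear experts, blended only at query time.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

centers, width = np.linspace(-3, 3, 7), 0.8
experts = []
for c in centers:
    sw = np.sqrt(np.exp(-0.5 * ((X[:, 0] - c) / width) ** 2))  # sqrt of RF weights
    A = np.c_[X[:, 0] - c, np.ones(len(X))]                    # local linear features
    beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]  # weighted LS
    experts.append(beta)

def predict(x):
    acts = np.exp(-0.5 * ((x - centers) / width) ** 2)         # RF activations
    preds = np.array([b[0] * (x - c) + b[1] for b, c in zip(experts, centers)])
    return float(acts @ preds / acts.sum())                    # blend by activation

print("prediction at x=1.0:", predict(1.0), "true value:", np.sin(1.0))
```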