Technology
Regularizing AdaBoost
Rätsch, Gunnar, Onoda, Takashi, Müller, Klaus R.
We will also introduce a regularization strategy (analogous to weight decay) into boosting. This strategy uses slack variables to achieve a soft margin (section 4). Numerical experiments show the validity of our regularization approach in section 5, and finally a brief conclusion is given. 2 AdaBoost Algorithm Let {h_t(x) : t = 1, ..., T} be an ensemble of T hypotheses defined on input vector x
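The base algorithm the abstract refers to can be sketched as plain AdaBoost with decision stumps (the paper's contribution, the soft-margin variant with slack variables, would modify the example-weight update; this sketch shows only the unregularized baseline, and the toy data here is invented for illustration):

```python
import numpy as np

def adaboost(X, y, T=5):
    """Minimal AdaBoost with decision stumps; labels y in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                  # example weights
    stumps, alphas = [], []
    for _ in range(T):
        best = None
        for j in range(X.shape[1]):          # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-10)                # avoid log(0) on clean splits
        alpha = 0.5 * np.log((1 - err) / err)   # hypothesis weight
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)       # upweight misclassified examples
        w /= w.sum()
        stumps.append((j, thr, sign))
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    agg = sum(a * s * np.where(X[:, j] > t, 1, -1)
              for (j, t, s), a in zip(stumps, alphas))
    return np.sign(agg)

# toy 1-D data: two separable clusters
X = np.array([[0.0], [1.0], [2.0], [5.0], [6.0], [7.0]])
y = np.array([-1, -1, -1, 1, 1, 1])
stumps, alphas = adaboost(X, y)
print(predict(stumps, alphas, X))  # matches y on this separable toy set
```

A soft-margin regularizer would, roughly, cap how much weight any single hard example can accumulate across rounds.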
Learning Mixture Hierarchies
Vasconcelos, Nuno, Lippman, Andrew
The hierarchical representation of data has various applications in domains such as data mining, machine vision, or information retrieval. In this paper we introduce an extension of the Expectation-Maximization (EM) algorithm that learns mixture hierarchies in a computationally efficient manner. Efficiency is achieved by progressing in a bottom-up fashion, i.e. by clustering the mixture components of a given level in the hierarchy to obtain those of the level above. This clustering requires only knowledge of the mixture parameters, there being no need to resort to intermediate samples.
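The key idea, building a parent mixture from child-mixture parameters alone, can be sketched by moment-matching groups of Gaussian components (the hard grouping below is a simplification of the paper's EM-based soft assignment, and the toy components are invented for illustration):

```python
import numpy as np

def merge_components(weights, means, covs, assign, K):
    """Moment-match child Gaussian components into K parent components,
    using only the child parameters -- no data samples needed."""
    d = means.shape[1]
    pw = np.zeros(K)
    pm = np.zeros((K, d))
    pc = np.zeros((K, d, d))
    for k in range(K):
        idx = [i for i, a in enumerate(assign) if a == k]
        w = weights[idx]
        pw[k] = w.sum()                                   # parent weight
        pm[k] = (w[:, None] * means[idx]).sum(0) / pw[k]  # parent mean
        diffs = means[idx] - pm[k]
        # parent covariance = within-child + between-child spread
        pc[k] = sum(wi * (covs[i] + np.outer(di, di))
                    for wi, i, di in zip(w, idx, diffs)) / pw[k]
    return pw, pm, pc

# four 1-D child components grouped into two parents
weights = np.array([0.25, 0.25, 0.25, 0.25])
means = np.array([[0.0], [1.0], [10.0], [11.0]])
covs = np.array([[[1.0]], [[1.0]], [[1.0]], [[1.0]]])
assign = [0, 0, 1, 1]
pw, pm, pc = merge_components(weights, means, covs, assign, K=2)
print(pw)  # [0.5 0.5]
print(pm)  # parent means near [[0.5], [10.5]]
```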
Fast Neural Network Emulation of Dynamical Systems for Computer Animation
Grzeszczuk, Radek, Terzopoulos, Demetri, Hinton, Geoffrey E.
Computer animation through the numerical simulation of physics-based graphics models offers unsurpassed realism, but it can be computationally demanding. This paper demonstrates the possibility of replacing the numerical simulation of nontrivial dynamic models with a dramatically more efficient "NeuroAnimator" that exploits neural networks. NeuroAnimators are automatically trained off-line to emulate physical dynamics through the observation of physics-based models in action. Depending on the model, its neural network emulator can yield physically realistic animation one or two orders of magnitude faster than conventional numerical simulation. We demonstrate NeuroAnimators for a variety of physics-based models. 1 Introduction Animation based on physical principles has been an influential trend in computer graphics for over a decade (see, e.g., [1, 2, 3]). In conjunction with suitable control and constraint mechanisms, physical models also facilitate the production of copious quantities of realistic animation in a highly automated fashion.
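The emulation idea can be sketched end to end: observe a simulator off-line, fit an emulator of its one-step state map, then roll the emulator forward instead of the simulator. A linear least-squares emulator stands in here for the paper's neural network (exact for the linear damped spring used as the toy "physics"):

```python
import numpy as np

def simulate_step(state, k=4.0, c=0.4, dt=0.01):
    """Ground-truth physics: damped spring, semi-implicit Euler step."""
    x, v = state
    v = v + dt * (-k * x - c * v)
    x = x + dt * v
    return np.array([x, v])

# off-line phase: collect (state, next_state) pairs by observing the simulator
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(200, 2))
nexts = np.array([simulate_step(s) for s in states])

# fit the emulator: next_state ~ state @ A (a linear map, exact here)
A, *_ = np.linalg.lstsq(states, nexts, rcond=None)

# emulation phase: roll out the learned map instead of the simulator
s_true = np.array([1.0, 0.0])
s_emul = np.array([1.0, 0.0])
for _ in range(100):
    s_true = simulate_step(s_true)
    s_emul = s_emul @ A
print(np.max(np.abs(s_true - s_emul)))  # near machine precision
```

For nonlinear models the linear fit would not suffice, which is where the paper's neural network emulator comes in.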
Bayesian Modeling of Human Concept Learning
I consider the problem of learning concepts from small numbers of positive examples, a feat which humans perform routinely but which computers are rarely capable of. Bridging machine learning and cognitive science perspectives, I present both theoretical analysis and an empirical study with human subjects for the simple task of learning concepts corresponding to axis-aligned rectangles in a multidimensional feature space. Existing learning models, when applied to this task, cannot explain how subjects generalize from only a few examples of the concept. I propose a principled Bayesian model based on the assumption that the examples are a random sample from the concept to be learned. The model gives precise fits to human behavior on this simple task and provides qualitative insights into more complex, realistic cases of concept learning.
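The random-sampling assumption yields a "size principle": a hypothesis of size s assigns each positive example likelihood 1/s, so small hypotheses consistent with the data dominate. A minimal one-dimensional sketch (intervals on a grid standing in for the paper's axis-aligned rectangles; the grid and examples are invented for illustration):

```python
import numpy as np

def generalization(examples, y, grid=np.arange(0, 21)):
    """P(y in concept | examples), averaging over interval hypotheses
    weighted by the size principle: likelihood = (1/size)^n."""
    num = den = 0.0
    n = len(examples)
    for lo in grid:
        for hi in grid:
            if hi <= lo:
                continue
            if all(lo <= x <= hi for x in examples):  # hypothesis fits data
                lik = (1.0 / (hi - lo)) ** n          # size principle
                den += lik
                if lo <= y <= hi:
                    num += lik
    return num / den

print(generalization([9, 10, 11], 10))  # inside the examples: exactly 1
print(generalization([9, 10, 11], 18))  # far outside: small
```

Because tight hypotheses get exponentially more weight as examples accumulate, generalization sharpens quickly, matching the few-example behavior the abstract describes.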
Spike-Based Compared to Rate-Based Hebbian Learning
Kempter, Richard, Gerstner, Wulfram, Hemmen, J. Leo van
For example, a 'Hebbian' (Hebb 1949) learning rule which is driven by the correlations between presynaptic and postsynaptic rates may be used to generate neuronal receptive fields (e.g., Linsker 1986, MacKay and Miller 1990, Wimbauer et al. 1997) with properties similar to those of real neurons. A rate-based description, however, neglects effects which are due to the pulse structure of neuronal signals.
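The rate-based rule referred to here is just dw proportional to (presynaptic rate) x (postsynaptic rate). A minimal sketch, with Oja-style weight normalization added as an assumption to keep the weights bounded, shows the correlation-driven receptive-field development: the weight vector aligns with the dominant input correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01                                 # learning rate
for _ in range(2000):
    base = rng.normal()
    pre = np.array([base, base + 0.1 * rng.normal()])  # correlated inputs
    post = w @ pre                         # postsynaptic rate (linear unit)
    w += eta * post * pre                  # Hebbian update: pre x post
    w /= np.linalg.norm(w)                 # normalization (assumed, Oja-style)
print(np.round(np.abs(w), 2))             # roughly [0.71 0.71]: correlated axis
```

A spike-based treatment, the subject of the paper, replaces these rates with correlations between individual pre- and postsynaptic spike times.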
Orientation, Scale, and Discontinuity as Emergent Properties of Illusory Contour Shape
Thornber, Karvel K., Williams, Lance R.
A recent neural model of illusory contour formation is based on a distribution of natural shapes traced by particles moving with constant speed in directions given by Brownian motions. The input to that model consists of pairs of position and direction constraints and the output consists of the distribution of contours joining all such pairs. In general, these contours will not be closed and their distribution will not be scale-invariant. In this paper, we show how to compute a scale-invariant distribution of closed contours given position constraints alone and use this result to explain a well known illusory contour effect. 1 INTRODUCTION It has been proposed by Mumford [3] that the distribution of illusory contour shapes can be modeled by particles travelling with constant speed in directions given by Brownian motions. More recently, Williams and Jacobs [7, 8] introduced the notion of a stochastic completion field, the distribution of particle trajectories joining pairs of position and direction constraints, and showed how it could be computed in a local parallel network.
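The particle model described by Mumford can be sampled directly: constant speed, with the direction performing a Brownian motion. A minimal sketch (step size, speed, and diffusion rate are invented parameters; a completion field would further weight trajectories by their end constraints):

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory(x, y, theta, steps=100, speed=0.1, sigma=0.3):
    """One sample path: position advances at constant speed while the
    direction theta diffuses (Brownian motion in angle)."""
    pts = [(x, y)]
    for _ in range(steps):
        theta += sigma * np.sqrt(speed) * rng.normal()  # direction diffuses
        x += speed * np.cos(theta)
        y += speed * np.sin(theta)
        pts.append((x, y))
    return np.array(pts)

# Monte Carlo estimate: fraction of particles leaving the origin heading
# east that end within radius 1 of the point (10, 0) after 100 steps
ends = np.array([trajectory(0.0, 0.0, 0.0)[-1] for _ in range(2000)])
hit = np.mean(np.hypot(ends[:, 0] - 10, ends[:, 1]) < 1.0)
print(hit)
```

The small diffusion constant sigma makes nearly straight completions far more likely than wiggly ones, which is the shape prior underlying the model.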
Direct Optimization of Margins Improves Generalization in Combined Classifiers
Mason, Llew, Bartlett, Peter L., Baxter, Jonathan
[Figure caption: The dark curve is AdaBoost, the light curve is DOOM. DOOM sacrifices significant training error for improved test error (horizontal marks on margin 0 line).] 1 Introduction Many learning algorithms for pattern classification minimize some cost function of the training data, with the aim of minimizing error (the probability of misclassifying an example). One example of such a cost function is simply the classifier's error on the training data.
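The cost functions being contrasted are all functions of the margin y*f(x) of a combined classifier. A sketch of three of them on toy margins (the sigmoid below stands in for a DOOM-style margin cost; the paper's exact cost family differs, and the margin values are invented):

```python
import numpy as np

margins = np.array([-0.2, 0.05, 0.3, 0.9])   # y * f(x) for four examples

train_error = np.mean(margins <= 0)          # 0-1 loss on the training data
exp_cost = np.mean(np.exp(-margins))         # AdaBoost's exponential cost
sigmoid_cost = np.mean(1 / (1 + np.exp(5 * margins)))  # flatter margin cost

print(train_error)           # 0.25: one of four margins is negative
print(round(exp_cost, 3))    # 0.83
```

The exponential cost penalizes small positive margins heavily, while a flatter margin cost tolerates a few negative margins in exchange for pushing the rest further from zero, the trade the figure caption describes.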