Robust, Efficient, Globally-Optimized Reinforcement Learning with the Parti-Game Algorithm

Neural Information Processing Systems

A finite cost represents the number of cells that must be traversed to reach the goal cell, while a cost of infinity represents the belief that there is no reliable way of getting from that cell to the goal. Cells with a cost of infinity are called losing cells; the others are called winning cells.
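The winning/losing distinction can be illustrated with a simplified sketch (my own construction, not the paper's algorithm): a breadth-first search from the goal assigns each reachable cell its transition count, and unreachable cells keep an infinite cost. The parti-game algorithm itself uses a minimax shortest-path computation over a learned cell partition; the toy adjacency map below stands in for that partition and is assumed symmetric.

```python
from collections import deque

def cell_costs(adjacency, goal):
    """Breadth-first search outward from the goal: the cost of a cell is the
    number of cell-to-cell transitions needed to reach the goal; cells the
    search never reaches keep a cost of infinity ('losing' cells).
    Assumes the adjacency map is symmetric, so forward and reverse
    reachability coincide."""
    costs = {cell: float('inf') for cell in adjacency}
    costs[goal] = 0
    queue = deque([goal])
    while queue:
        cell = queue.popleft()
        for neighbor in adjacency[cell]:
            if costs[neighbor] == float('inf'):
                costs[neighbor] = costs[cell] + 1
                queue.append(neighbor)
    return costs

# Hypothetical cell layout: A-B-goal in a chain, C cut off from everything.
adjacency = {
    'A': ['B'], 'B': ['A', 'goal'], 'goal': ['B'],
    'C': [],   # isolated cell: no reliable way to the goal
}
costs = cell_costs(adjacency, 'goal')
winning = [c for c in costs if costs[c] < float('inf')]
losing = [c for c in costs if costs[c] == float('inf')]
```

With this layout, `A` needs two transitions, while `C` keeps infinite cost and is classified as losing.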


Source Separation as a By-Product of Regularization

Neural Information Processing Systems

This paper reveals a previously ignored connection between two important fields: regularization and independent component analysis (ICA). We show that at least one representative of a broad class of algorithms (regularizers that reduce network complexity) extracts independent features as a by-product. This algorithm is Flat Minimum Search (FMS), a recent general method for finding low-complexity networks with high generalization capability. FMS works by minimizing both training error and required weight precision. According to our theoretical analysis, the hidden layer of an FMS-trained autoassociator attempts to code each input with a sparse code using as few simple features as possible. In experiments, the method extracts optimal codes for difficult versions of the "noisy bars" benchmark problem by separating the underlying sources, whereas ICA and PCA fail.
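The general recipe FMS instantiates, minimizing training error plus a network-complexity penalty, can be sketched as follows. This is not FMS itself: plain L2 weight decay stands in for FMS's flatness/precision penalty (which is considerably more involved), and the linear autoassociator with tied weights is a toy stand-in; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))             # toy inputs, 8 dimensions
W = rng.normal(scale=0.1, size=(8, 3))    # encoder to 3 hidden units; decoder is W.T (tied)
eta, lam = 0.01, 1e-3                     # step size and penalty strength

def loss(W):
    R = X @ W @ W.T                        # reconstruction of the input
    # training error + complexity penalty (weight decay as a stand-in)
    return np.mean((X - R) ** 2) + lam * np.sum(W ** 2)

before = loss(W)
for _ in range(50):
    E = X @ W @ W.T - X                    # reconstruction error
    # gradient of the mean squared error term plus the decay term
    grad = (2 / X.size) * (X.T @ E @ W + E.T @ X @ W) + 2 * lam * W
    W -= eta * grad
after = loss(W)
```

Gradient descent on this combined objective trades reconstruction accuracy against weight magnitude, which is the shape of the argument in the abstract, though the sparse-coding behavior reported there is specific to FMS's penalty.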


Regularizing AdaBoost

Neural Information Processing Systems

We will also introduce a regularization strategy (analogous to weight decay) into boosting. This strategy uses slack variables to achieve a soft margin (section 4). Numerical experiments in section 5 show the validity of our regularization approach, and finally a brief conclusion is given.

2 AdaBoost Algorithm

Let {h_t(x) : t = 1, ..., T} be an ensemble of T hypotheses defined on input vector x.
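The standard hard-margin AdaBoost loop that this section builds on can be sketched in a few lines; variable names are illustrative, and the paper's soft-margin regularization with slack variables is not shown.

```python
import numpy as np

def adaboost(X, y, stumps, T):
    """Minimal AdaBoost sketch: y in {-1, +1}, stumps is a list of candidate
    weak hypotheses h(x) -> {-1, +1}. Each round picks the stump with the
    lowest weighted error, weights it by alpha_t, and re-weights examples."""
    n = len(y)
    w = np.full(n, 1.0 / n)                  # example weights
    ensemble = []                            # (alpha_t, h_t) pairs
    for _ in range(T):
        errors = [np.sum(w * (h(X) != y)) for h in stumps]
        t = int(np.argmin(errors))
        eps = max(errors[t], 1e-12)          # guard against log(0)
        if eps >= 0.5:
            break                            # no weak learner left
        alpha = 0.5 * np.log((1 - eps) / eps)
        ensemble.append((alpha, stumps[t]))
        w = w * np.exp(-alpha * y * stumps[t](X))
        w /= w.sum()                         # renormalize the distribution
    return lambda x: np.sign(sum(a * h(x) for a, h in ensemble))

# Toy 1-D data separable by a threshold at 1.5.
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([-1, -1, 1, 1])
stumps = [lambda x, c=c: np.where(x > c, 1, -1) for c in (0.5, 1.5, 2.5)]
f = adaboost(X, y, stumps, T=5)
```

The soft-margin variant of section 4 modifies the example re-weighting so that hard-to-classify points may keep slack instead of accumulating unbounded weight.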




Learning Mixture Hierarchies

Neural Information Processing Systems

The hierarchical representation of data has various applications in domains such as data mining, machine vision, or information retrieval. In this paper we introduce an extension of the Expectation-Maximization (EM) algorithm that learns mixture hierarchies in a computationally efficient manner. Efficiency is achieved by progressing in a bottom-up fashion, i.e. by clustering the mixture components of a given level in the hierarchy to obtain those of the level above. This clustering requires only knowledge of the mixture parameters, there being no need to resort to intermediate samples.
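The bottom-up idea can be sketched as follows; this is a simplification of the paper's method, using a weighted k-means over the lower level's component means and mixing weights in place of the full parameter-space EM, and all names are illustrative.

```python
import numpy as np

def cluster_components(means, weights, k, iters=20, seed=0):
    """Group the components of a lower mixture level into k parent
    components using only their parameters (here: means and mixing
    weights), never the original samples."""
    rng = np.random.default_rng(seed)
    centers = means[rng.choice(len(means), size=k, replace=False)]
    for _ in range(iters):
        # E-like step: assign each child component to its closest parent
        d = np.linalg.norm(means[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # M-like step: parent mean = mixing-weight average of child means
        for j in range(k):
            mask = assign == j
            if mask.any():
                centers[j] = np.average(means[mask], axis=0,
                                        weights=weights[mask])
    # parent mixing weight = total weight of its children
    parent_weights = np.array([weights[assign == j].sum() for j in range(k)])
    return centers, parent_weights, assign

# Four lower-level components forming two obvious groups.
means = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
weights = np.array([0.25, 0.25, 0.25, 0.25])
centers, pw, assign = cluster_components(means, weights, k=2)
```

Note that only the four parameter vectors are touched; no samples from the lower-level mixture are ever drawn, which is the source of the efficiency claimed in the abstract.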


Fast Neural Network Emulation of Dynamical Systems for Computer Animation

Neural Information Processing Systems

Computer animation through the numerical simulation of physics-based graphics models offers unsurpassed realism, but it can be computationally demanding. This paper demonstrates the possibility of replacing the numerical simulation of nontrivial dynamic models with a dramatically more efficient "NeuroAnimator" that exploits neural networks. NeuroAnimators are automatically trained off-line to emulate physical dynamics through the observation of physics-based models in action. Depending on the model, its neural network emulator can yield physically realistic animation one or two orders of magnitude faster than conventional numerical simulation. We demonstrate NeuroAnimators for a variety of physics-based models.

1 Introduction

Animation based on physical principles has been an influential trend in computer graphics for over a decade (see, e.g., [1, 2, 3]). In conjunction with suitable control and constraint mechanisms, physical models also facilitate the production of copious quantities of realistic animation in a highly automated fashion.
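The emulation recipe, observe a simulator, fit a map from state_t to state_{t+dt}, then animate by iterating the learned map, can be sketched as below. For brevity the emulator is a least-squares linear map rather than the paper's neural network, and the "physics" is a damped spring under explicit Euler integration; all names are illustrative.

```python
import numpy as np

def simulate_spring(x0, v0, steps, dt=0.01, k=4.0, c=0.5):
    """Explicit-Euler integration of a damped spring: the stand-in
    'physics-based model' whose behavior we will emulate."""
    states = [np.array([x0, v0])]
    for _ in range(steps):
        x, v = states[-1]
        states.append(np.array([x + dt * v, v + dt * (-k * x - c * v)]))
    return np.array(states)

# 1. Observe the model in action: collect (state_t, state_{t+1}) pairs.
traj = simulate_spring(1.0, 0.0, steps=500)
S, S_next = traj[:-1], traj[1:]

# 2. "Train" the emulator off-line: here, one linear map via least squares.
W, *_ = np.linalg.lstsq(S, S_next, rcond=None)

# 3. Animate by iterating the emulator from a new initial state.
state = np.array([0.5, 0.0])
emulated = [state]
for _ in range(100):
    state = state @ W
    emulated.append(state)
emulated = np.array(emulated)

reference = simulate_spring(0.5, 0.0, steps=100)
max_err = np.abs(emulated - reference).max()
```

Because the toy dynamics here are exactly linear, the fitted map reproduces the integrator almost perfectly; the paper's contribution is making the same scheme work for nontrivial nonlinear models, where the emulator must be a neural network.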



Bayesian Modeling of Human Concept Learning

Neural Information Processing Systems

I consider the problem of learning concepts from small numbers of positive examples, a feat which humans perform routinely but which computers are rarely capable of. Bridging machine learning and cognitive science perspectives, I present both theoretical analysis and an empirical study with human subjects for the simple task of learning concepts corresponding to axis-aligned rectangles in a multidimensional feature space. Existing learning models, when applied to this task, cannot explain how subjects generalize from only a few examples of the concept. I propose a principled Bayesian model based on the assumption that the examples are a random sample from the concept to be learned. The model gives precise fits to human behavior on this simple task and provides qualitative insights into more complex, realistic cases of concept learning.
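The "random sample from the concept" assumption gives rise to a size principle: a consistent hypothesis of a given size has likelihood (1/size)^n for n examples, so tighter hypotheses dominate as examples accumulate. A hedged one-dimensional sketch (intervals instead of rectangles; the hypothesis grid and all names are my own):

```python
import numpy as np

def generalization(examples, query, grid=np.linspace(0, 1, 51)):
    """Posterior-weighted probability that `query` belongs to the concept,
    averaging over all interval hypotheses [lo, hi] on a fixed grid."""
    n = len(examples)
    num = den = 0.0
    for lo in grid:
        for hi in grid:
            size = hi - lo
            if size <= 0 or min(examples) < lo or max(examples) > hi:
                continue                 # hypothesis inconsistent with data
            like = size ** (-n)          # size principle: small is likely
            den += like
            if lo <= query <= hi:
                num += like
    return num / den

# Three examples clustered in the middle of the feature range.
p_inside = generalization([0.4, 0.5, 0.6], query=0.5)
p_far = generalization([0.4, 0.5, 0.6], query=0.9)
```

A query point among the examples is accepted by every consistent hypothesis, while a distant point is accepted only by large (hence down-weighted) hypotheses, producing the sharp fall-off in generalization that the model fits to human judgments.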


Spike-Based Compared to Rate-Based Hebbian Learning

Neural Information Processing Systems

For example, a 'Hebbian' (Hebb 1949) learning rule which is driven by the correlations between presynaptic and postsynaptic rates may be used to generate neuronal receptive fields (e.g., Linsker 1986, MacKay and Miller 1990, Wimbauer et al. 1997) with properties similar to those of real neurons. A rate-based description, however, neglects effects which are due to the pulse structure of neuronal signals.
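A rate-based Hebbian update of the kind described, with weight changes driven by pre/post rate correlations, can be sketched as follows (illustrative, with an added normalization step for stability that the cited rules implement in various ways):

```python
import numpy as np

def hebbian_step(w, pre, eta=0.05):
    """One rate-based Hebbian update for a linear rate neuron: the weight
    change is the product of presynaptic and postsynaptic rates; the weight
    vector is renormalized to keep it bounded."""
    post = w @ pre                  # postsynaptic rate
    w = w + eta * post * pre        # Hebbian correlation term
    return w / np.linalg.norm(w)    # normalization for stability

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=4)
w /= np.linalg.norm(w)
for _ in range(500):
    shared = rng.normal()
    # channels 0 and 1 carry a common signal; 2 and 3 are independent noise
    pre = np.array([shared, shared, rng.normal(), rng.normal()])
    w = hebbian_step(w, pre)
```

Driven only by rate correlations, the weight vector aligns with the correlated input channels, the mechanism behind the receptive-field results cited above; spike timing plays no role here, which is exactly the limitation the passage points out.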