Fast, Large-Scale Transformation-Invariant Clustering

Neural Information Processing Systems

In previous work on "transformed mixtures of Gaussians" and "transformed hidden Markov models", we showed how the EM algorithm in a discrete latent variable model can be used to jointly normalize data (e.g., center images, pitch-normalize spectrograms) and learn a mixture model of the normalized data. The only input to the algorithm is the data, a list of possible transformations, and the number of clusters to find. The main criticism of this work was that the exhaustive computation of the posterior probabilities over transformations would make scaling up to large feature vectors and large sets of transformations intractable. Here, we describe how a tremendous speedup is achieved through the use of a variational technique for decoupling transformations, and a fast Fourier transform method for computing posterior probabilities.
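As a rough illustration of the FFT idea, the posterior over all circular translations of a cluster mean can be computed in O(N log N) via cross-correlation rather than by evaluating each shift separately. The isotropic-Gaussian noise model, the `var` parameter, the uniform prior over shifts, and the function names below are assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def shift_log_likelihoods(image, mean, var):
    # Gaussian log-likelihood of `image` under `mean` circularly shifted by
    # every possible 2-D translation, via FFT cross-correlation:
    #   ||x - shift(mu)||^2 = ||x||^2 + ||mu||^2 - 2 <x, shift(mu)>,
    # and the inner product for all shifts at once is a correlation.
    cross = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(mean))))
    sq = (image ** 2).sum() + (mean ** 2).sum() - 2.0 * cross
    return -sq / (2.0 * var)        # one entry per (dy, dx) shift

def shift_posterior(image, mean, var):
    # Posterior over shifts under a uniform prior (softmax of the
    # log-likelihoods, computed stably by subtracting the max).
    ll = shift_log_likelihoods(image, mean, var)
    p = np.exp(ll - ll.max())
    return p / p.sum()
```

For an image that is an exact circular shift of the mean, the posterior peaks at the true shift.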


A Rotation and Translation Invariant Discrete Saliency Network

Neural Information Processing Systems

We describe a neural network which enhances and completes salient closed contours. Our work is different from all previous work in three important ways. First, like the input provided to V1 by LGN, the input to our computation is isotropic. That is, the input is composed of spots, not edges. Second, our network computes a well defined function of the input based on a distribution of closed contours characterized by a random process. Third, even though our computation is implemented in a discrete network, its output is invariant to continuous rotations and translations of the input pattern.


A Natural Policy Gradient

Neural Information Processing Systems

These greedy optimal actions are those that would be chosen under one improvement step of policy iteration with approximate, compatible value functions, as defined by Sutton et al.
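The natural gradient referred to here is the vanilla policy gradient preconditioned by the inverse Fisher information matrix of the policy parameterization. A minimal numerical sketch follows; the `damping` term is an assumption added for stability, not part of the definition.

```python
import numpy as np

def natural_gradient(fisher, grad, damping=1e-6):
    # Natural gradient direction: F^{-1} g, where F is the Fisher
    # information matrix and g the vanilla policy gradient.  A small
    # damping term keeps the solve well-conditioned.
    F = fisher + damping * np.eye(len(grad))
    return np.linalg.solve(F, grad)
```

With F proportional to the identity, the natural gradient simply rescales the vanilla gradient, which matches the intuition that the Fisher matrix defines the local metric on parameter space.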


Intransitive Likelihood-Ratio Classifiers

Neural Information Processing Systems

In this work, we introduce an information-theoretic correction term to the likelihood-ratio classification method for multiple classes. Under certain conditions, the term is sufficient for optimally correcting the difference between the true and estimated likelihood ratio, and we analyze this in the Gaussian case. We find that the new correction term significantly improves the classification results when tested on medium-vocabulary speech recognition tasks. Moreover, the addition of this term makes the class comparisons analogous to an intransitive game, and we therefore use several tournament-like strategies to deal with this issue. We find that further small improvements are obtained by using an appropriate tournament. Lastly, we find that intransitivity appears to be a good measure of classification confidence.
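One tournament-like strategy can be sketched as a round-robin over corrected pairwise log-likelihood ratios; the class winning the most pairwise "games" is chosen. The `correction` array below is a hypothetical stand-in for the paper's information-theoretic term, and the whole function is an illustrative sketch rather than the paper's procedure.

```python
import numpy as np

def round_robin_winner(log_likelihoods, correction):
    # Pairwise likelihood-ratio comparisons with an additive correction
    # term; because the corrected comparisons can be intransitive, the
    # winner is decided by total pairwise wins, not a single chain.
    n = len(log_likelihoods)
    wins = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            ratio = log_likelihoods[i] - log_likelihoods[j] + correction[i, j]
            if ratio > 0:
                wins[i] += 1
            else:
                wins[j] += 1
    return int(np.argmax(wins))
```

With a zero correction this reduces to ordinary maximum-likelihood classification; a non-zero correction can overturn pairwise outcomes and hence the tournament winner.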


Duality, Geometry, and Support Vector Regression

Neural Information Processing Systems

We develop an intuitive geometric framework for support vector regression (SVR). By examining when ɛ-tubes exist, we show that SVR can be regarded as a classification problem in the dual space. Hard and soft ɛ-tubes are constructed by separating the convex or reduced convex hulls respectively of the training data with the response variable shifted up and down by ɛ. A novel SVR model is proposed based on choosing the max-margin plane between the two shifted datasets.
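The construction of the two shifted datasets can be sketched directly: each training response is lifted into the feature space as an extra coordinate and shifted up and down by ɛ, producing a two-class separation problem. The function name and array layout below are assumptions for the sketch.

```python
import numpy as np

def shifted_classification_sets(X, y, eps):
    # Lift regression pairs (x_i, y_i) into the augmented space (x_i, y_i):
    # points shifted up by eps get label +1, points shifted down get -1.
    # A hard eps-tube exists iff these two sets are linearly separable.
    up   = np.hstack([X, (y + eps)[:, None]])
    down = np.hstack([X, (y - eps)[:, None]])
    Z = np.vstack([up, down])
    labels = np.concatenate([np.ones(len(X)), -np.ones(len(X))])
    return Z, labels
```

Running any max-margin classifier on `(Z, labels)` then recovers the regression plane as the separating plane, which is the geometric picture the abstract describes.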


Grouping and dimensionality reduction by locally linear embedding

Neural Information Processing Systems

Locally Linear Embedding (LLE) is an elegant nonlinear dimensionality-reduction technique recently introduced by Roweis and Saul [2]. It fails when the data is divided into separate groups. We study a variant of LLE that can simultaneously group the data and calculate a local embedding for each group. An estimate of the upper bound on the intrinsic dimension of the data set is obtained automatically.
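For reference, standard LLE (not the grouping variant studied here) can be sketched as follows: reconstruct each point from its nearest neighbors, then find low-dimensional coordinates that preserve those reconstruction weights. The regularization constant is an assumption, commonly added when the local covariance is singular.

```python
import numpy as np

def lle(X, n_neighbors=5, n_components=2, reg=1e-3):
    # Standard LLE (Roweis & Saul): solve for reconstruction weights W,
    # then embed using the bottom eigenvectors of (I - W)^T (I - W).
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:n_neighbors + 1]        # skip the point itself
        Z = X[nbrs] - X[i]                             # centered neighborhood
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(len(nbrs))     # regularize if singular
        w = np.linalg.solve(C, np.ones(len(nbrs)))
        W[i, nbrs] = w / w.sum()                       # weights sum to one
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)                     # ascending eigenvalues
    return vecs[:, 1:n_components + 1]                 # drop constant vector
```

When the data splits into disconnected neighborhood graphs, `M` becomes block-diagonal and this global eigenproblem mixes the groups, which is the failure mode the grouping variant addresses.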


On the Generalization Ability of On-Line Learning Algorithms

Neural Information Processing Systems

In this paper we show that online algorithms for classification and regression can be naturally used to obtain hypotheses with good data-dependent tail bounds on their risk. Our results are proven without requiring complicated concentration-of-measure arguments and they hold for arbitrary online learning algorithms. Furthermore, when applied to concrete online algorithms, our results yield tail bounds that in many cases are comparable or better than the best known bounds.


Sampling Techniques for Kernel Methods

Neural Information Processing Systems

We propose randomized techniques for speeding up Kernel Principal Component Analysis on three levels: sampling and quantization of the Gram matrix in training, randomized rounding in evaluating the kernel expansions, and random projections in evaluating the kernel itself. In all three cases, we give sharp bounds on the accuracy of the obtained approximations. Rather intriguingly, all three techniques can be viewed as instantiations of the following idea: replace the kernel function by a "randomized kernel" which behaves like the kernel in expectation.
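The Gram-matrix sampling level can be sketched as entrywise sparsification that preserves the matrix in expectation: keep each entry with probability p and rescale by 1/p. The scheme below is a simplified assumption for illustration, not the paper's exact estimator.

```python
import numpy as np

def sampled_gram(K, p, rng):
    # Keep each Gram entry with probability p, rescaled by 1/p so that
    # E[sampled_gram(K)] = K.  Entries are sampled once per symmetric
    # pair so the result stays symmetric.
    mask = rng.random(K.shape) < p
    mask = np.triu(mask)
    mask = mask | mask.T
    return np.where(mask, K / p, 0.0)
```

Averaging many independent draws recovers `K`, which is the "behaves like the kernel in expectation" property, while each individual draw is sparse and cheaper to work with.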


Unsupervised Learning of Human Motion Models

Neural Information Processing Systems

This paper presents an unsupervised learning algorithm that can derive the probabilistic dependence structure of parts of an object (a moving human body in our examples) automatically from unlabeled data. The distinguishing feature of this work is that it is based on unlabeled data, i.e., the training features include both useful foreground parts and background clutter, and the correspondence between the parts and the detected features is unknown. We use decomposable triangulated graphs to depict the probabilistic independence of parts, but the unsupervised technique is not limited to this type of graph. In the new approach, the labeling of the data (part assignments) is treated as a hidden variable and the EM algorithm is applied. A greedy algorithm is developed to select parts and to search for the optimal structure based on the differential entropy of these variables. The success of our algorithm is demonstrated by applying it to generate models of human motion automatically from unlabeled real image sequences.