Proximity Graphs for Clustering and Manifold Learning

Neural Information Processing Systems

Many machine learning algorithms for clustering or dimensionality reduction take as input a cloud of points in Euclidean space, and construct a graph with the input data points as vertices. This graph is then partitioned (clustering) or used to redefine metric information (dimensionality reduction). There has been much recent work on new methods for graph-based clustering and dimensionality reduction, but not much on constructing the graph itself. Graphs typically used include the fully connected graph, a local fixed-grid graph (for image segmentation) or a nearest-neighbor graph. We suggest that the graph should adapt locally to the structure of the data. This can be achieved by a graph ensemble that combines multiple minimum spanning trees, each fit to a perturbed version of the data set. We show that such a graph ensemble usually produces a better representation of the data manifold than standard methods, and that it provides robustness to a subsequent clustering or dimensionality reduction algorithm based on the graph.
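The ensemble construction lends itself to a short sketch. The following Python snippet (assuming numpy and scipy; the Gaussian jitter scale, the number of trees, and the rule of averaging original distances over the trees in which an edge appears are illustrative choices, not the authors' exact procedure) combines minimum spanning trees fit to perturbed copies of the data into a single proximity graph.

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_ensemble_graph(X, n_trees=10, noise_scale=0.05, seed=0):
    """Combine MSTs of jittered copies of X into one proximity graph."""
    rng = np.random.default_rng(seed)
    D0 = squareform(pdist(X))                  # distances on the original data
    weights = np.zeros_like(D0)                # accumulated edge weights
    counts = np.zeros_like(D0)                 # how often each edge entered an MST
    for _ in range(n_trees):
        Xp = X + noise_scale * rng.standard_normal(X.shape)   # perturbed copy
        T = minimum_spanning_tree(csr_matrix(squareform(pdist(Xp)))).toarray()
        mask = (T > 0) | (T > 0).T             # symmetrize the tree's edges
        weights[mask] += D0[mask]
        counts[mask] += 1
    return np.where(counts > 0, weights / np.maximum(counts, 1), 0.0)

# Usage: G = mst_ensemble_graph(np.random.rand(200, 3)); the adjacency matrix G
# can then be handed to a graph-based clustering or embedding method.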


Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons

Neural Information Processing Systems

This paper explores the computational consequences of simultaneous intrinsic and synaptic plasticity in individual model neurons. It proposes a new intrinsic plasticity mechanism for a continuous activation model neuron based on low-order moments of the neuron's firing rate distribution. The goal of the intrinsic plasticity mechanism is to enforce a sparse distribution of the neuron's activity level. In conjunction with Hebbian learning at the neuron's synapses, the neuron is shown to discover sparse directions in the input.
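As a rough illustration of how the two forms of plasticity interact, the sketch below (plain numpy; the threshold-adaptation rule and the Gaussian toy input are stand-ins, not the paper's moment-based update) couples an Oja-style Hebbian weight rule with an intrinsic rule that drives the neuron's mean firing rate toward a low target.

import numpy as np

rng = np.random.default_rng(0)
dim, steps = 10, 20000
w = 0.1 * rng.standard_normal(dim)    # synaptic weights (Hebbian)
gain, bias = 1.0, 0.0                 # intrinsic excitability parameters
eta_w, eta_ip, target_rate = 1e-3, 1e-3, 0.1

for _ in range(steps):
    x = rng.standard_normal(dim)                   # toy input sample
    u = w @ x                                      # net synaptic input
    y = 1.0 / (1.0 + np.exp(-(gain * u + bias)))   # firing rate in (0, 1)
    w += eta_w * y * (x - y * w)                   # Oja-normalized Hebbian rule
    bias += eta_ip * (target_rate - y)             # intrinsic: keep mean rate sparse

print("learned direction:", w / np.linalg.norm(w))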


Efficient Out-of-Sample Extension of Dominant-Set Clusters

Neural Information Processing Systems

Dominant sets are a new graph-theoretic concept that has proven relevant in pairwise data clustering problems, such as image segmentation. They generalize the notion of a maximal clique to edge-weighted graphs and have intriguing, nontrivial connections to continuous quadratic optimization and spectral-based grouping. We address the problem of grouping out-of-sample examples after the clustering process has taken place. This may serve either to drastically reduce the computational burden associated with the processing of very large data sets, or to efficiently deal with dynamic situations in which data sets need to be updated continually. We show that the very notion of a dominant set offers a simple and efficient way of doing this. Numerical experiments on various grouping problems show the effectiveness of the approach.
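A small sketch of the machinery involved may help. The snippet below extracts one dominant set with replicator dynamics on an affinity matrix and then tests whether an out-of-sample point should join it by comparing its weighted affinity with the set's internal cohesiveness; the acceptance rule and the toy Gaussian affinities are illustrative assumptions, not a transcription of the paper's exact criterion.

import numpy as np

def dominant_set(A, n_iter=1000, tol=1e-8):
    """Replicator dynamics x <- x * (A x) / (x' A x), started at the barycenter."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x  # characteristic vector; its support is the dominant set

def joins_dominant_set(a_new, A, x):
    """a_new: affinities of the new point to the n in-sample points."""
    cohesiveness = x @ A @ x          # internal similarity of the dominant set
    affinity_to_set = x @ a_new       # weighted affinity of the new point
    return affinity_to_set >= cohesiveness

# Toy example: two Gaussian blobs, test a point near the first blob.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
A = np.exp(-np.square(pts[:, None] - pts[None, :]).sum(-1))
np.fill_diagonal(A, 0.0)
x = dominant_set(A)
a_new = np.exp(-np.square(pts - np.array([0.1, -0.1])).sum(-1))
print("joins dominant set:", joins_dominant_set(a_new, A, x))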


Result Analysis of the NIPS 2003 Feature Selection Challenge

Neural Information Processing Systems

The NIPS 2003 workshops included a feature selection competition organized by the authors. We provided participants with five datasets from different application domains and called for classification results using a minimal number of features. The competition took place over a period of 13 weeks and attracted 78 research groups. Participants were asked to make online submissions on the validation and test sets, with performance on the validation set presented immediately to the participant and performance on the test set presented at the workshop. In total, 1863 entries were made on the validation sets during the development period and 135 entries on all test sets for the final competition. The winners used a combination of Bayesian neural networks with ARD priors and Dirichlet diffusion trees. Other top entries used a variety of feature selection methods, combining filters and/or wrapper or embedded methods with Random Forests, kernel methods, or neural networks as the classification engine. The results of the benchmark (including the predictions made by the participants and the features they selected) and the scoring software are publicly available. The benchmark is available at www.nipsfsc.ecs.soton.ac.uk for post-challenge submissions to stimulate further research.



Non-Local Manifold Tangent Learning

Neural Information Processing Systems

We claim, and present arguments to the effect, that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation suggests exploring non-local manifold learning algorithms which attempt to discover shared structure in the tangent planes at different positions. A criterion for such an algorithm is proposed and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where a local nonparametric method fails.
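To make the flavor of such a criterion concrete, the sketch below scores a tangent-plane prediction function by how much of each neighbor difference it can reconstruct, using a purely local PCA predictor as the baseline; the relative-error normalization and the spiral toy data are illustrative choices, not the paper's exact objective.

import numpy as np

def tangent_criterion(X, predict_tangent, k=5):
    """Average relative residual of neighbor differences after projecting
    onto the predicted tangent plane at each point."""
    n = X.shape[0]
    D2 = np.square(X[:, None] - X[None, :]).sum(-1)
    np.fill_diagonal(D2, np.inf)
    errs = []
    for i in range(n):
        B = predict_tangent(X[i])                   # (d, D) predicted tangent basis
        for j in np.argsort(D2[i])[:k]:             # k nearest neighbors
            diff = X[j] - X[i]
            coef, *_ = np.linalg.lstsq(B.T, diff, rcond=None)
            resid = diff - B.T @ coef               # part outside the tangent plane
            errs.append((resid @ resid) / (diff @ diff))
    return float(np.mean(errs))

def local_pca_predictor(X, d=1, k=5):
    """Purely local baseline: d leading PCA directions of the k nearest neighbors."""
    def predict(x):
        idx = np.argsort(np.square(X - x).sum(-1))[:k]
        centered = X[idx] - X[idx].mean(0)
        _, _, Vt = np.linalg.svd(centered, full_matrices=False)
        return Vt[:d]
    return predict

t = np.linspace(0, 3 * np.pi, 200)
X = np.c_[t * np.cos(t), t * np.sin(t)]             # 1-D spiral embedded in 2-D
print(tangent_criterion(X, local_pca_predictor(X)))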


Unsupervised Variational Bayesian Learning of Nonlinear Models

Neural Information Processing Systems

In this paper we present a framework for using multi-layer perceptron (MLP) networks in nonlinear generative models trained by variational Bayesian learning. The nonlinearity is handled by linearizing it using a Gauss-Hermite quadrature at the hidden neurons. This yields an accurate approximation for cases of large posterior variance. The method can be used to derive nonlinear counterparts for linear algorithms such as factor analysis, independent component/factor analysis and state-space models. This is demonstrated with a nonlinear factor analysis experiment in which even 20 sources can be estimated from a real-world speech data set.
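The quadrature step can be illustrated in a few lines: given a Gaussian posterior over a hidden neuron's input, Gauss-Hermite quadrature yields the mean and variance of the unit's output. The sketch below shows only this building block (here for a tanh unit), not the full variational Bayesian MLP of the paper.

import numpy as np

def gauss_hermite_moments(f, mean, var, n_points=10):
    """E[f(h)] and Var[f(h)] for h ~ N(mean, var) via Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_points)  # probabilists' rule
    h = mean + np.sqrt(var) * nodes
    w = weights / weights.sum()          # normalize to a probability measure
    m1 = w @ f(h)                        # output mean
    m2 = w @ f(h) ** 2                   # second moment
    return m1, m2 - m1 ** 2

mean_out, var_out = gauss_hermite_moments(np.tanh, mean=0.5, var=2.0)
print(mean_out, var_out)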


Matrix Exponential Gradient Updates for On-line Learning and Bregman Projection

Neural Information Processing Systems

We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated by the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: online learning with a simple square loss, and finding a symmetric positive definite matrix subject to symmetric linear constraints. The updates generalize the Exponentiated Gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the analysis of each algorithm generalizes to the non-diagonal case. We apply both new algorithms, called the Matrix Exponentiated Gradient (MEG) update and DefiniteBoost, to learn a kernel matrix from distance measurements.
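A minimal sketch of the MEG update with the square loss reads as follows; the learning rate and the synthetic data stream are illustrative choices, and the paper should be consulted for the exact setting and analysis.

import numpy as np
from scipy.linalg import expm, logm

def meg_update(W, X, y, eta=0.1):
    """One MEG step: exponentiate (log W - eta * gradient), renormalize to trace one."""
    grad = 2.0 * (np.trace(W @ X) - y) * X           # gradient of (tr(W X) - y)^2
    M = expm(logm(W) - eta * (grad + grad.T) / 2)    # keep the exponent symmetric
    return M / np.trace(M)

d = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((d, d))
W_true = A @ A.T
W_true /= np.trace(W_true)                           # target: SPD with trace one
W = np.eye(d) / d                                    # start at the "uniform" matrix
for _ in range(500):
    x = rng.standard_normal(d)
    X = np.outer(x, x)                               # instance matrix
    y = np.trace(W_true @ X)                         # noiseless measurement
    W = meg_update(W, X, y)
print("error:", np.linalg.norm(W - W_true))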


Class-size Independent Generalization Analysis of Some Discriminative Multi-Category Classification

Neural Information Processing Systems

We consider the problem of deriving class-size independent generalization bounds for some regularized discriminative multi-category classification methods. In particular, we obtain an expected generalization bound for a standard formulation of multi-category support vector machines. Based on the theoretical result, we argue that the formulation over-penalizes misclassification error, which in theory may lead to poor generalization performance. A remedy, based on a generalization of multi-category logistic regression (conditional maximum entropy), is then proposed, and its theoretical properties are examined.


Semi-supervised Learning with Penalized Probabilistic Clustering

Neural Information Processing Systems

While clustering is usually an unsupervised operation, there are circumstances in which we believe (with varying degrees of certainty) that items A and B should be assigned to the same cluster, while items A and C should not. We would like such pairwise relations to influence cluster assignments of out-of-sample data in a manner consistent with the prior knowledge expressed in the training set. Our starting point is probabilistic clustering based on Gaussian mixture models (GMM) of the data distribution. We express clustering preferences in the prior distribution over assignments of data points to clusters. This prior penalizes cluster assignments according to the degree to which they violate the preferences.
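A simplified sketch of the effect of such a penalized prior: fit a Gaussian mixture, then fold must-link and cannot-link preferences into the assignments with an iterated-conditional-modes pass. The paper formulates a proper prior over soft assignments; the hard-assignment approximation, the penalty weight, and the toy data below are illustrative simplifications.

import numpy as np
from sklearn.mixture import GaussianMixture

def penalized_assignments(X, must_link, cannot_link, n_clusters=2,
                          penalty=5.0, n_sweeps=10, seed=0):
    gmm = GaussianMixture(n_components=n_clusters, random_state=seed).fit(X)
    log_resp = np.log(gmm.predict_proba(X) + 1e-12)    # per-point GMM evidence
    z = log_resp.argmax(1)                             # unconstrained starting labels
    for _ in range(n_sweeps):
        for i in range(len(X)):
            scores = log_resp[i].copy()
            for (a, b) in must_link:                   # reward agreeing with partner
                if i in (a, b):
                    scores[z[b if i == a else a]] += penalty
            for (a, b) in cannot_link:                 # penalize agreeing with partner
                if i in (a, b):
                    scores[z[b if i == a else a]] -= penalty
            z[i] = scores.argmax()
    return z

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(2.5, 1, (30, 2))])
labels = penalized_assignments(X, must_link=[(0, 1)], cannot_link=[(0, 30)])
print(labels[:5], labels[30:35])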