The Method of Quantum Clustering
We propose a novel clustering method that is an extension of ideas inherent to scale-space clustering and support-vector clustering. Like the latter, it associates every data point with a vector in Hilbert space, and like the former it puts emphasis on their total sum, which is equal to the scale-space probability function. The novelty of our approach is the study of an operator in Hilbert space, represented by the Schrödinger equation of which the probability function is a solution. This Schrödinger equation contains a potential function that can be derived analytically from the probability function.
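To make the construction concrete, here is a minimal sketch in our own notation (assuming a Gaussian Parzen-window estimator of width σ; the paper's exact conventions may differ). Requiring the density estimate ψ to be an eigenstate of a Schrödinger operator determines the potential analytically:

\[
\psi(\mathbf{x}) = \sum_i e^{-\|\mathbf{x}-\mathbf{x}_i\|^2/2\sigma^2},
\qquad
\Big(-\tfrac{\sigma^2}{2}\nabla^2 + V(\mathbf{x})\Big)\psi = E\,\psi
\;\;\Rightarrow\;\;
V(\mathbf{x}) = E + \frac{\sigma^2}{2}\,\frac{\nabla^2\psi}{\psi}.
\]

Minima of V(x) then serve as candidate cluster centers.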
Escaping the Convex Hull with Extrapolated Vector Machines
Maximum margin classifiers such as Support Vector Machines (SVMs) depend critically upon the convex hulls of the training samples of each class, as they implicitly search for the minimum distance between the convex hulls. We propose Extrapolated Vector Machines (XVMs), which rely on extrapolations outside these convex hulls.
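To illustrate the geometric idea (a toy construction of our own, not the XVM algorithm itself): a training point can be pushed outside its class's convex hull by moving it away from the class centroid.

```python
import numpy as np

def extrapolate_outside_hull(X, lam=0.5):
    """Push each point of X away from the class centroid by a factor lam.

    For lam > 0, any point that is a vertex of the convex hull of X is
    moved strictly outside that hull. Toy illustration only; the XVM
    paper defines its own extrapolation scheme.
    """
    centroid = X.mean(axis=0)
    return X + lam * (X - centroid)

# Usage: extrapolate one class of a 2-D toy data set.
X_class = np.random.randn(100, 2)
X_extrapolated = extrapolate_outside_hull(X_class, lam=0.5)
```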
Estimating Car Insurance Premia: a Case Study in High-Dimensional Data Inference
Chapados, Nicolas, Bengio, Yoshua, Vincent, Pascal, Ghosn, Joumana, Dugas, Charles, Takeuchi, Ichiro, Meng, Linyan
Estimating car insurance premia requires predicting, for each insured, the expected claim amount conditional on available information. This conditional expected claim amount is called the pure premium and it is the basis of the gross premium charged to the insured. This expected value is conditioned on information available about the insured and about the contract, which we call the input profile here. This regression problem is difficult for several reasons: a large number of examples, a large number of variables (most of which are discrete and multi-valued), non-stationarity of the distribution, and a conditional distribution of the dependent variable that is very different from those usually encountered in typical applications.
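In symbols (our notation, not necessarily the paper's): writing A for the claim amount and X for the input profile, the pure premium is the conditional expectation

\[ p(x) = \mathbb{E}[\,A \mid X = x\,], \]

from which the gross premium is obtained, e.g. by adding loadings for expenses and profit.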
A Variational Approach to Learning Curves
Malzahn, Dörthe, Opper, Manfred
We combine the replica approach from statistical physics with a variational approach to analyze learning curves analytically. We apply the method to Gaussian process regression. As a main result, we derive approximate relations between empirical error measures, the generalization error and the posterior variance.
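As one concrete instance of such a relation (a standard Gaussian-process identity stated in our notation, not a result quoted from the paper): when the GP model is well specified, the Bayes generalization error at a test input equals the posterior predictive variance there,

\[ \varepsilon(x_*) = \mathbb{E}\big[(f(x_*)-\hat{f}(x_*))^2\big] = \sigma^2_{\mathrm{post}}(x_*), \]

so the learning curve is the average of the posterior variance over the input distribution.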
Risk Sensitive Particle Filters
Thrun, Sebastian, Langford, John, Verma, Vandi
We propose a new particle filter that incorporates a model of costs when generating particles. The approach is motivated by the observation that the costs of accidentally not tracking hypotheses might be significant in some areas of state space, and next to irrelevant in others. By incorporating a cost model into particle filtering, states that are more critical to the system performance are more likely to be tracked. Automatic calculation of the cost model is implemented using an MDP value function calculation that estimates the value of tracking a particular state. Experiments in two mobile robot domains illustrate the appropriateness of the approach.
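A minimal sketch of the core idea, under our own simplifying assumptions (a generic bootstrap filter whose importance weights are modulated by a state-dependent risk factor r(x); the paper derives r from an MDP value function, which we leave abstract here):

```python
import numpy as np

def risk_sensitive_step(particles, weights, propagate, likelihood, risk, obs):
    """One step of a risk-sensitive bootstrap particle filter (sketch).

    particles  : (N, d) array of state hypotheses
    weights    : (N,) normalized importance weights
    propagate  : function sampling next states from the motion model
    likelihood : function p(obs | state), vectorized over particles
    risk       : state-dependent risk factor r(x) >= 0, assumed given here
    """
    particles = propagate(particles)
    # Weight by observation likelihood *and* by state risk, so regions of
    # state space where tracking failures are costly keep more particles.
    weights = weights * likelihood(obs, particles) * risk(particles)
    weights = weights / weights.sum()
    # Multinomial resampling toward the risk-weighted distribution.
    # (For unbiased posterior estimates one would also carry a 1/r(x)
    # correction factor; omitted here for brevity.)
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```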
Active Learning in the Drug Discovery Process
Warmuth, Manfred K., Rätsch, Gunnar, Mathieson, Michael, Liao, Jun, Lemmen, Christian
We investigate the following data mining problem from Computational Chemistry: From a large data set of compounds, find those that bind to a target molecule in as few iterations of biological testing as possible. In each iteration a comparatively small batch of compounds is screened for binding to the target. We apply active learning techniques for selecting the successive batches. One selection strategy picks unlabeled examples closest to the maximum margin hyperplane. Another produces many weight vectors by running perceptrons over multiple permutations of the data.
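The first selection strategy is easy to sketch (our own minimal version, assuming a trained scikit-learn SVM; the paper's setup differs in its details):

```python
import numpy as np
from sklearn.svm import SVC

def closest_to_margin(clf, X_unlabeled, batch_size):
    """Select the unlabeled compounds closest to the maximum-margin
    hyperplane, i.e. those with the smallest |decision function| values."""
    margins = np.abs(clf.decision_function(X_unlabeled))
    return np.argsort(margins)[:batch_size]

# Usage sketch: retrain on all labels seen so far, then pick the next batch.
# clf = SVC(kernel="linear").fit(X_labeled, y_labeled)
# batch_idx = closest_to_margin(clf, X_unlabeled, batch_size=50)
```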
Tempo tracking and rhythm quantization by sequential Monte Carlo
Cemgil, Ali Taylan, Kappen, Bert
Music notation can be viewed as a list of pitch levels and corresponding timestamps. Ideally, one would like to recover a score directly from sound. We formulate two well-known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization), as filtering and maximum a posteriori (MAP) state estimation tasks. The structure of the proposed model is equivalent to a switching state space model. The inferences are carried out using sequential Monte Carlo integration (particle filtering) techniques.
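For reference, a generic switching state space model of the kind invoked above (our notation, not the paper's): a discrete switch s_k selects the dynamics of a continuous state z_k,

\[
s_k \sim p(s_k \mid s_{k-1}), \qquad
z_k = A_{s_k} z_{k-1} + \epsilon_k, \qquad
y_k = C_{s_k} z_k + \nu_k,
\]

with Gaussian noises ε_k and ν_k. Exact inference in such models is intractable, which motivates the sequential Monte Carlo approach.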
Prodding the ROC Curve: Constrained Optimization of Classifier Performance
Mozer, Michael C., Dodier, Robert, Colagrosso, Michael D., Guerra-Salcedo, Cesar, Wolniewicz, Richard
When designing a two-alternative classifier, one ordinarily aims to maximize the classifier's ability to discriminate between members of the two classes. We describe a situation in a real-world business application of machine-learning prediction in which an additional constraint is placed on the nature of the solution: that the classifier achieve a specified correct acceptance or correct rejection rate (i.e., that it achieve a fixed accuracy on members of one class or the other). Our domain is predicting churn in the telecommunications industry. Churn refers to customers who switch from one service provider to another. We propose four algorithms for training a classifier subject to this domain constraint, and present results showing that each algorithm yields a reliable improvement in performance.
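The simplest baseline for such a constraint (our own illustration; the paper proposes four training-time algorithms rather than post-hoc thresholding) is to set the decision threshold so that a target correct rejection rate is met on held-out data:

```python
import numpy as np

def threshold_for_rejection_rate(scores_negative, target_cr):
    """Choose a decision threshold achieving a target correct rejection rate.

    scores_negative : classifier scores for held-out negative examples
    target_cr       : desired fraction of negatives scored below threshold
    Examples scoring >= threshold are accepted (predicted positive).
    """
    return np.quantile(scores_negative, target_cr)

# Usage: with target_cr = 0.95, about 95% of held-out negatives fall below
# the threshold, i.e. the correct rejection rate is ~0.95 on similar data.
```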
Product Analysis: Learning to Model Observations as Products of Hidden Variables
Frey, Brendan J., Kannan, Anitha, Jojic, Nebojsa
Factor analysis and principal components analysis can be used to model linear relationships between observed variables and linearly map high-dimensional data to a lower-dimensional hidden space. In factor analysis, the observations are modeled as a linear combination of normally distributed hidden variables. We describe a nonlinear generalization of factor analysis, called "product analysis", that models the observed variables as a linear combination of products of normally distributed hidden variables. Just as factor analysis can be viewed as unsupervised linear regression on unobserved, normally distributed hidden variables, product analysis can be viewed as unsupervised linear regression on products of unobserved, normally distributed hidden variables. The mapping between the data and the hidden space is nonlinear, so we use an approximate variational technique for inference and learning.
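To make the generative model concrete, a hedged sketch in our own notation (the paper's exact parameterization may differ): with Gaussian hidden variables grouped into pairs indexed by k, each observation is a linear combination of products of hidden variables plus Gaussian noise,

\[
x_d = \sum_k \Lambda_{dk}\, z_{a_k} z_{b_k} + \epsilon_d,
\qquad z \sim \mathcal{N}(0, I),\quad \epsilon \sim \mathcal{N}(0, \Psi);
\]

replacing each product z_{a_k} z_{b_k} by a single hidden variable z_k recovers standard factor analysis.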