
 Smola, Alex J.


Tighter Bounds for Structured Estimation

Neural Information Processing Systems

Large-margin structured estimation methods work by minimizing a convex upper bound of loss functions. While they allow for efficient optimization algorithms, these convex formulations are not tight and sacrifice the ability to accurately model the true loss. We present tighter non-convex bounds based on generalizing the notion of a ramp loss from binary classification to structured estimation. We show that a small modification of existing optimization algorithms suffices to solve this modified problem. On structured prediction tasks such as protein sequence alignment and web page ranking, our algorithm leads to improved accuracy.
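The ramp-loss idea is easiest to see in the binary case. The sketch below is an illustration only, not the paper's structured formulation: it contrasts the unbounded convex hinge bound with the tighter ramp loss obtained by clipping it, which is what makes the objective non-convex but robust to badly misclassified points.

```python
import numpy as np

def hinge(margin):
    """Convex upper bound on the 0/1 loss: max(0, 1 - y*f(x))."""
    return np.maximum(0.0, 1.0 - margin)

def ramp(margin, s=-1.0):
    """Ramp loss: the hinge clipped at 1 - s, i.e. min(1 - s, hinge(margin)).
    It equals hinge(margin) - max(0, s - margin), a difference of convex
    functions, which is why CCCP-style optimization applies."""
    return np.minimum(1.0 - s, hinge(margin))

margins = np.array([-3.0, -1.0, 0.0, 0.5, 1.0, 2.0])  # values of y * f(x)
print("hinge:", hinge(margins))  # grows without bound for badly misclassified points
print("ramp: ", ramp(margins))   # saturates, so outliers cannot dominate the risk
```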


Distribution Matching for Transduction

Neural Information Processing Systems

Many transductive inference algorithms assume that distributions over training and test estimates should be related, e.g. by providing a large margin of separation on both sets. We use this idea to design a transduction algorithm which can be used without modification for classification, regression, and structured estimation. At its heart we exploit the fact that for a good learner the distributions over the outputs on training and test sets should match. This is a classical two-sample problem which can be solved efficiently in its most general form by using distance measures in Hilbert Space. It turns out that a number of existing heuristics can be viewed as special cases of our approach.
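A minimal sketch of the two-sample distance this abstract alludes to, assuming scalar outputs and an RBF kernel with an arbitrary bandwidth (both assumptions): the squared maximum mean discrepancy between training and test outputs, which a transductive learner could add to its objective as a matching penalty.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    """Gaussian RBF kernel matrix between two 1-D arrays of outputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(train_out, test_out, sigma=1.0):
    """Squared maximum mean discrepancy between the two empirical output distributions."""
    return (rbf_kernel(train_out, train_out, sigma).mean()
            + rbf_kernel(test_out, test_out, sigma).mean()
            - 2.0 * rbf_kernel(train_out, test_out, sigma).mean())

# A transductive learner could add a penalty such as mmd2(f(X_train), f(X_test))
# to its objective so that predictions on both sets are distributed alike.
rng = np.random.default_rng(0)
print(mmd2(rng.normal(0, 1, 200), rng.normal(0, 1, 200)))  # near zero: matched
print(mmd2(rng.normal(0, 1, 200), rng.normal(2, 1, 200)))  # clearly positive: mismatched
```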


COFI RANK - Maximum Margin Matrix Factorization for Collaborative Ranking

Neural Information Processing Systems

In this paper, we consider collaborative filtering as a ranking problem. We present a method which uses Maximum Margin Matrix Factorization and optimizes ranking instead of rating. We employ structured output prediction to optimize directly for ranking scores. Experimental results show that our method gives very good ranking scores and scales well on collaborative filtering tasks.
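As a rough illustration of optimizing ranking rather than rating, the snippet below scores items with a low-rank factorization and penalizes mis-ordered item pairs with a hinge loss; the paper itself optimizes a structured ranking loss, so this is only a simplified stand-in with illustrative names and sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, rank = 50, 40, 5          # illustrative sizes
U = 0.1 * rng.normal(size=(n_users, rank))  # user factors
V = 0.1 * rng.normal(size=(n_items, rank))  # item factors

def pairwise_rank_loss(user, item_ids, ratings):
    """Hinge penalty whenever an item the user rated higher is not scored
    at least a margin of 1 above a lower-rated item."""
    scores = V[item_ids] @ U[user]
    loss = 0.0
    for a in range(len(item_ids)):
        for b in range(len(item_ids)):
            if ratings[a] > ratings[b]:
                loss += max(0.0, 1.0 - (scores[a] - scores[b]))
    return loss

# Training would minimize the sum of this loss over all users, plus norm
# regularization on U and V, by alternating or stochastic updates.
print(pairwise_rank_loss(0, np.array([1, 2, 3]), np.array([5, 3, 1])))
```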


Convex Learning with Invariances

Neural Information Processing Systems

Incorporating invariances into a learning algorithm is a common problem in machine learning. We provide a convex formulation which can deal with arbitrary loss functions and arbitrary invariances. In addition, it is a drop-in replacement for most optimization algorithms for kernels, including solvers of the SVMStruct family. The advantage of our setting is that it relies on column generation instead of modifying the underlying optimization problem directly.
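One generic way to read "column generation instead of modifying the optimization problem" is sketched below: repeatedly train on the current point set, then for each example search its invariance orbit for the most margin-violating transform and add that transform as a new training point. This is a toy scheme under assumed details (a shift invariance, a subgradient-trained linear hinge classifier), not necessarily the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def transforms(x, deltas=(-0.5, -0.25, 0.0, 0.25, 0.5)):
    """Toy invariance set: small shifts of the first coordinate."""
    out = np.tile(x, (len(deltas), 1))
    out[:, 0] += deltas
    return out

def train_hinge(X, y, lam=0.1, steps=200):
    """Subgradient descent on the l2-regularized hinge loss (a stand-in learner)."""
    w = np.zeros(X.shape[1])
    for t in range(1, steps + 1):
        viol = y * (X @ w) < 1.0
        grad = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(y)
        w -= grad / t
    return w

# Base data: two Gaussian blobs.
X = np.vstack([rng.normal(1.5, 1, (30, 2)), rng.normal(-1.5, 1, (30, 2))])
y = np.hstack([np.ones(30), -np.ones(30)])

X_work, y_work = X.copy(), y.copy()
for _ in range(3):                                # a few generation rounds
    w = train_hinge(X_work, y_work)
    new_X, new_y = [], []
    for xi, yi in zip(X, y):
        cand = transforms(xi)
        worst = cand[np.argmin(yi * (cand @ w))]  # most violating transform
        if yi * (worst @ w) < 1.0:                # add it only if it violates the margin
            new_X.append(worst)
            new_y.append(yi)
    if not new_X:
        break
    X_work = np.vstack([X_work, new_X])
    y_work = np.hstack([y_work, new_y])
print("constraint set grew from", len(y), "to", len(y_work), "points")
```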


Bundle Methods for Machine Learning

Neural Information Processing Systems

We present a globally convergent method for regularized risk minimization problems. Our method applies to Support Vector estimation, regression, Gaussian Processes, and any other regularized risk minimization setting which leads to a convex optimization problem. SVMPerf can be shown to be a special case of our approach. In addition to the unified framework we present tight convergence bounds, which show that our algorithm converges in O(1/ε) steps to ε precision for general convex problems and in O(log(1/ε)) steps for continuously differentiable problems. We demonstrate in experiments the performance of our approach.
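A compact numerical sketch of the cutting-plane/bundle idea for regularized risk minimization (an illustration, not the authors' solver): each iteration adds a linear lower bound of the empirical risk at the current iterate and re-solves the small master problem, here through its dual over the simplex with scipy. The l2-regularized hinge risk and all problem sizes are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1, 1, (40, 3)), rng.normal(-1, 1, (40, 3))])
y = np.hstack([np.ones(40), -np.ones(40)])
lam = 0.1  # regularization constant

def risk_and_subgradient(w):
    """Empirical hinge risk and one of its subgradients."""
    margins = y * (X @ w)
    viol = margins < 1.0
    risk = np.maximum(0.0, 1.0 - margins).mean()
    grad = -(y[viol, None] * X[viol]).sum(axis=0) / len(y)
    return risk, grad

def solve_master(A, b):
    """Minimize lam/2*||w||^2 + max_i(A[i]@w + b[i]) through its dual over the simplex."""
    n = len(b)
    obj = lambda a: (A.T @ a) @ (A.T @ a) / (2 * lam) - b @ a
    cons = {"type": "eq", "fun": lambda a: a.sum() - 1.0}
    alpha = minimize(obj, np.full(n, 1.0 / n), bounds=[(0, 1)] * n, constraints=cons).x
    return -(A.T @ alpha) / lam   # primal solution recovered from the dual variables

w = np.zeros(X.shape[1])
A, b = np.empty((0, X.shape[1])), np.empty(0)
for _ in range(20):
    risk, grad = risk_and_subgradient(w)
    A = np.vstack([A, grad])            # new cutting plane: risk(w') >= grad@w' + offset
    b = np.append(b, risk - grad @ w)
    w = solve_master(A, b)
print("regularized risk:", lam / 2 * w @ w + risk_and_subgradient(w)[0])
```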


Colored Maximum Variance Unfolding

Neural Information Processing Systems

Maximum variance unfolding (MVU) is an effective heuristic for dimensionality reduction. It produces a low-dimensional representation of the data by maximizing the variance of their embeddings while preserving the local distances of the original data. We show that MVU also optimizes a statistical dependence measure which aims to retain the identity of individual observations under the distance-preserving constraints. This general view allows us to design "colored" variants of MVU, which produce low-dimensional representations for a given task, e.g.
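The dependence measure in question is the Hilbert-Schmidt Independence Criterion (HSIC). The snippet below, with assumed kernels and synthetic data, shows the biased empirical HSIC estimate that a "colored" embedding would seek to keep large between the embedding and the side information.

```python
import numpy as np

def rbf(X, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(K, L):
    """Biased empirical HSIC estimate: tr(K H L H) / (n - 1)^2."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
side = rng.normal(size=(100, 1))                    # side information ("colors")
embed_dep = side + 0.1 * rng.normal(size=(100, 1))  # embedding aligned with it
embed_ind = rng.normal(size=(100, 1))               # embedding independent of it
print(hsic(rbf(embed_dep), rbf(side)))   # large: dependence retained
print(hsic(rbf(embed_ind), rbf(side)))   # near zero: dependence lost
```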



A Kernel Method for the Two-Sample-Problem

Neural Information Processing Systems

We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic.
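A sketch of the test statistic: the squared maximum mean discrepancy between the kernel mean embeddings of the two samples. For the rejection threshold the snippet uses a simple permutation approximation of the null distribution, rather than the large deviation bound or asymptotic distribution the paper actually analyzes; kernel and bandwidth are assumptions.

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Squared distance between the kernel mean embeddings of the two samples."""
    return rbf(x, x, sigma).mean() + rbf(y, y, sigma).mean() - 2 * rbf(x, y, sigma).mean()

def permutation_pvalue(x, y, n_perm=200, sigma=1.0, seed=0):
    """Approximate the null distribution by shuffling sample labels."""
    rng = np.random.default_rng(seed)
    observed = mmd2(x, y, sigma)
    pooled = np.vstack([x, y])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        hits += mmd2(pooled[perm[:len(x)]], pooled[perm[len(x):]], sigma) >= observed
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
print(permutation_pvalue(rng.normal(0, 1, (50, 2)), rng.normal(0, 1, (50, 2))))  # large p
print(permutation_pvalue(rng.normal(0, 1, (50, 2)), rng.normal(1, 1, (50, 2))))  # small p
```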


Correcting Sample Selection Bias by Unlabeled Data

Neural Information Processing Systems

We consider the scenario where training and test data are drawn from different distributions, commonly referred to as sample selection bias. Most algorithms for this setting try to first recover sampling distributions and then make appropriate corrections based on the distribution estimate. We present a nonparametric method which directly produces resampling weights without distribution estimation. Our method works by matching distributions between training and testing sets in feature space. Experimental results demonstrate that our method works well in practice.
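A sketch of the kind of weighting this describes, in the spirit of kernel mean matching: choose nonnegative weights on the training points so that their weighted feature-space mean matches the test mean. The kernel, the box bound on the weights, and pinning the average weight to exactly 1 (the paper allows a small tolerance) are simplifications.

```python
import numpy as np
from scipy.optimize import minimize

def rbf(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kmm_weights(X_train, X_test, sigma=1.0, B=10.0):
    """Resampling weights beta: minimize 0.5*beta@K@beta - kappa@beta subject to
    0 <= beta <= B and mean(beta) = 1, so the weighted training mean in feature
    space matches the test mean."""
    n, m = len(X_train), len(X_test)
    K = rbf(X_train, X_train, sigma)
    kappa = (n / m) * rbf(X_train, X_test, sigma).sum(axis=1)
    obj = lambda beta: 0.5 * beta @ K @ beta - kappa @ beta
    cons = {"type": "eq", "fun": lambda beta: beta.mean() - 1.0}
    return minimize(obj, np.ones(n), bounds=[(0.0, B)] * n, constraints=cons).x

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, (60, 1))   # biased (source) sample
X_test = rng.normal(1.0, 1.0, (80, 1))    # target sample
w = kmm_weights(X_train, X_test)
# Training points that look like test points receive larger weights.
print(w[X_train[:, 0] > 0.5].mean(), "vs", w[X_train[:, 0] < -0.5].mean())
```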