

Kernel functions based on triplet comparisons

Neural Information Processing Systems

Given only information about a data set in the form of similarity triplets ("object A is more similar to object B than to object C"), we propose two ways of defining a kernel function on the data set. While previous approaches construct a low-dimensional Euclidean embedding of the data set that reflects the given similarity triplets, we aim at defining kernel functions that correspond to high-dimensional embeddings. These kernel functions can subsequently be used to apply any kernel method to the data set.
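The abstract does not spell out the construction, but one natural way to turn triplet answers into a kernel is to represent each object by a ±1 vector of its answers over a sample of triplets and take (normalized) inner products of these vectors. The sketch below is a hypothetical illustration of that idea; the distance matrix `D` is used only to simulate the triplet answers that a human annotator would provide, and is not assumed to be available to the learner.

```python
import numpy as np

rng = np.random.default_rng(0)

def triplet_kernel(D, n_triplets=2000):
    """Hypothetical sketch of a kernel built from triplet comparisons.

    Each object x is mapped to a +/-1 feature vector indexed by sampled
    pairs (b, c): +1 if x is closer to b than to c, -1 otherwise.
    The kernel is the normalized inner product of these feature vectors.
    D is a pairwise distance matrix used here only to answer the triplets.
    """
    n = D.shape[0]
    b = rng.integers(0, n, size=n_triplets)
    c = rng.integers(0, n, size=n_triplets)
    # Phi[x, t] = sign(dist(x, c_t) - dist(x, b_t)), i.e. the triplet answer
    Phi = np.sign(D[:, c] - D[:, b])
    return (Phi @ Phi.T) / n_triplets

# three points on a line: the first two are close, the third is far away
X = np.array([0.0, 0.1, 5.0])
D = np.abs(X[:, None] - X[None, :])
K = triplet_kernel(D)
```

Objects that answer most triplets the same way receive a high kernel value, so nearby points (the first two above) end up more similar under `K` than distant ones. The resulting matrix can be passed to any kernel method.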








But How Does It Work in Theory? Linear SVM with Random Features

Yitong Sun, Anna Gilbert, Ambuj Tewari

Neural Information Processing Systems

The random features method, proposed by Rahimi and Recht [2008], maps the data to a finite-dimensional feature space as a random approximation to the feature space of RBF kernels. With explicit finite-dimensional feature vectors available, the original kernel SVM (KSVM) is converted to a linear support vector machine (LSVM), which can be trained by faster algorithms (Shalev-Shwartz et al.
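The Rahimi-Recht construction can be sketched in a few lines: draw random Gaussian projections, apply a cosine with a random phase, and the inner products of the resulting feature vectors approximate the RBF kernel. This is a minimal sketch (the bandwidth `gamma` and feature count `D` are illustrative choices), not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, D=5000, gamma=0.5):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2).

    W is drawn from the Fourier transform of the kernel (a Gaussian
    with standard deviation sqrt(2 * gamma) per coordinate).
    """
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = random_fourier_features(X)

# inner products of the explicit features approximate the exact RBF kernel
K_approx = Z @ Z.T
sq_dists = ((X[:, None] - X[None]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists)
```

Because `Z` is an explicit finite-dimensional representation, it can be fed directly to any fast linear SVM solver in place of the kernel matrix, which is the conversion from KSVM to LSVM described above.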


Statistical and Computational Trade-Offs in Kernel K-Means

Daniele Calandriello, Lorenzo Rosasco

Neural Information Processing Systems

More precisely, we study a Nyström approach to kernel k-means. We analyze the statistical properties of the proposed method and show that it achieves the same accuracy as exact kernel k-means with only a fraction of the computations.
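A minimal sketch of the Nyström idea, assuming an RBF kernel and uniform landmark sampling (the paper's specific sampling scheme and guarantees are not reproduced here): approximate the kernel matrix from a small set of landmark columns, turn the approximation into explicit features, and run ordinary Lloyd's k-means on those features.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=0.5):
    # RBF kernel matrix between the rows of A and B
    return np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))

def nystrom_features(X, m=20, gamma=0.5):
    """Nyström features: K ~ K_nm K_mm^{-1} K_nm^T = Z Z^T,
    with Z = K_nm K_mm^{-1/2}, built from m uniformly sampled landmarks."""
    idx = rng.choice(len(X), size=m, replace=False)
    K_nm = rbf(X, X[idx], gamma)
    K_mm = rbf(X[idx], X[idx], gamma)
    vals, vecs = np.linalg.eigh(K_mm)
    inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(np.maximum(vals, 1e-12))) @ vecs.T
    return K_nm @ inv_sqrt

def lloyd(Z, k=2, iters=50):
    """Plain Lloyd's k-means on explicit features (illustrative init:
    one center from each end of the data)."""
    C = Z[[0, len(Z) - 1]]
    for _ in range(iters):
        lab = np.argmin(((Z[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.stack([Z[lab == j].mean(axis=0) for j in range(k)])
    return lab

# two well-separated Gaussian blobs
X = np.concatenate([rng.normal(0.0, 0.5, size=(30, 2)),
                    rng.normal(5.0, 0.5, size=(30, 2))])
Z = nystrom_features(X)
labels = lloyd(Z)
```

Running k-means on `Z` costs O(n m k) per iteration instead of the O(n^2 k) of exact kernel k-means, which is the computational saving the trade-off analysis quantifies.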