But How Does It Work in Theory? Linear SVM with Random Features

Yitong Sun, Anna Gilbert, Ambuj Tewari

Neural Information Processing Systems

The random features method, proposed by Rahimi and Recht [2008], maps the data to a finite-dimensional feature space as a random approximation to the feature space of RBF kernels. With explicit finite-dimensional feature vectors available, the original KSVM is converted to a linear support vector machine (LSVM), which can be trained by faster algorithms (Shalev-Shwartz et al.
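The random-features construction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: drawing frequencies `W` from a Gaussian and phases `b` uniformly gives features `z(x)` whose inner products approximate the RBF kernel in expectation; the dimensions, bandwidth `sigma`, and seed below are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 5, 2000, 1.0  # input dim, feature dim, RBF bandwidth (demo values)

# Random Fourier features for k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)):
# W ~ N(0, 1/sigma^2), b ~ Uniform[0, 2*pi], z(x) = sqrt(2/D) * cos(Wx + b),
# so that E[z(x) . z(y)] = k(x, y)  (Rahimi and Recht, 2008).
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def z(x):
    """Map x to an explicit D-dimensional random feature vector."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))
approx = z(x) @ z(y)
print(abs(exact - approx))  # shrinks as D grows
```

The explicit vectors `z(x)` can then be handed to any linear SVM solver in place of the kernel matrix, which is what makes the fast LSVM training algorithms applicable.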


Coresets for Archetypal Analysis

Sebastian Mair, Ulf Brefeld

Neural Information Processing Systems

Several approaches have been proposed to remedy the edacious nature of archetypal analysis, e.g., efficient active-set quadratic programming (Chen et al., 2014),