rfsvm
But How Does It Work in Theory? Linear SVM with Random Features
Yitong Sun, Anna Gilbert, Ambuj Tewari
The random features method, proposed by Rahimi and Recht [2008], maps the data to a finite-dimensional feature space that serves as a random approximation to the feature space of RBF kernels. With explicit finite-dimensional feature vectors available, the original kernel SVM (KSVM) is converted to a linear support vector machine (LSVM), which can be trained by faster algorithms (Shalev-Shwartz et al.).
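As a concrete illustration of this pipeline, here is a minimal Python sketch, not the paper's exact setup: it builds a random Fourier feature map approximating the RBF kernel and trains a linear SVM on the mapped data. The bandwidth sigma, the feature count N, and the toy data are placeholder assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy binary classification data (placeholder for a real dataset).
m, d = 500, 10
X = rng.normal(size=(m, d))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=m))

# Random Fourier features for the RBF kernel
# k(x, x') = exp(-||x - x'||^2 / (2 sigma^2)):
# z(x) = sqrt(2/N) * cos(Wx + b), rows of W ~ N(0, sigma^{-2} I), b ~ U[0, 2*pi).
N, sigma = 100, 1.0  # N << m random features, matching the paper's regime
W = rng.normal(scale=1.0 / sigma, size=(N, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=N)

def feature_map(X):
    return np.sqrt(2.0 / N) * np.cos(X @ W.T + b)

# With explicit finite-dimensional features, the KSVM becomes a linear SVM.
clf = LinearSVC(C=1.0).fit(feature_map(X), y)
print("training accuracy:", clf.score(feature_map(X), y))
```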
We prove that, under low noise assumptions, the support vector machine with $N\ll m$ random features (RFSVM) can achieve a learning rate faster than $O(1/\sqrt{m})$ on a training set with $m$ samples when an optimized feature map is used. Our work extends the previous fast-rate analysis of the random features method from the least squares loss to the 0-1 loss. We also show that the reweighted feature selection method, which approximates the optimized feature map, helps improve the performance of RFSVM in experiments on a synthetic data set. Published at the Neural Information Processing Systems Conference.
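The reweighted feature selection mentioned above approximates the optimized feature map; the exact procedure is given in the paper. As a hedged stand-in, the sketch below uses a simple heuristic: draw a large pool of random features, score them with an L1-regularized linear SVM, keep the N highest-weight features, and retrain. The pool size, sigma, and the toy data are placeholder assumptions, and the L1 scoring step is this sketch's substitute, not necessarily the authors' method.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
m, d = 500, 10
X = rng.normal(size=(m, d))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=m))

# Step 1: draw a large pool of candidate random Fourier features.
pool, N, sigma = 1000, 100, 1.0
W = rng.normal(scale=1.0 / sigma, size=(pool, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=pool)
Z = np.sqrt(2.0 / pool) * np.cos(X @ W.T + b)

# Step 2: score the pooled features with an L1-regularized linear SVM and
# keep the N with the largest coefficient magnitudes. This sparsity-based
# scoring is a stand-in heuristic, not necessarily the paper's procedure.
scorer = LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=5000).fit(Z, y)
keep = np.argsort(-np.abs(scorer.coef_.ravel()))[:N]

# Step 3: retrain a linear SVM on the selected features only. The feature
# scale (sqrt(2/pool) vs. sqrt(2/N)) is absorbed by the SVM's weights.
clf = LinearSVC(C=1.0).fit(Z[:, keep], y)
print(f"training accuracy with {N} selected features:", clf.score(Z[:, keep], y))
```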
Random Forest Kernel for High-Dimension Low Sample Size Classification
Lucca Portes Cavalheiro, Simon Bernard, Jean Paul Barddal, Laurent Heutte
High dimension, low sample size (HDLSS) problems are numerous among real-world applications of machine learning. From medical images to text processing, traditional machine learning algorithms are usually unsuccessful in learning the best possible concept from such data. In a previous work, we proposed a dissimilarity-based approach for multi-view classification, the Random Forest Dissimilarity (RFD), which achieves state-of-the-art results for such problems. In this work, we transpose the core principle of this approach to solving HDLSS classification problems, by using the RF similarity measure as a learned precomputed SVM kernel (RFSVM). We show that such a learned similarity measure is particularly suited and accurate for this classification context. Experiments conducted on 40 public HDLSS classification datasets, supported by rigorous statistical analyses, show that the RFSVM method outperforms existing methods for the majority of HDLSS problems while remaining very competitive on low- or non-HDLSS problems.
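A minimal Python sketch of the core idea, assuming the common "fraction of trees sharing a leaf" (Breiman proximity) definition of forest similarity; the paper's RFD measure and experimental protocol may differ in detail. The toy HDLSS data and the forest size are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stand-in for an HDLSS dataset: many features, few samples.
X, y = make_classification(n_samples=60, n_features=500, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

# Fit a random forest, then define the similarity of two samples as the
# fraction of trees in which they land in the same leaf (Breiman proximity).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

def rf_similarity(A, B):
    la, lb = rf.apply(A), rf.apply(B)  # leaf indices, shape (n_samples, n_trees)
    return (la[:, None, :] == lb[None, :, :]).mean(axis=2)

# Use the learned similarity as a precomputed SVM kernel: fit on the
# train-train Gram matrix, predict with the test-train similarity matrix.
svm = SVC(kernel="precomputed").fit(rf_similarity(X_tr, X_tr), y_tr)
print("test accuracy:", svm.score(rf_similarity(X_te, X_tr), y_te))
```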