But How Does It Work in Theory? Linear SVM with Random Features
Anna Gilbert, Ambuj Tewari, Yitong Sun
We prove that, under low noise assumptions, the support vector machine with $N\ll m$ random features (RFSVM) can achieve a learning rate faster than $O(1/\sqrt{m})$ on a training set with $m$ samples when an optimized feature map is used. Our work extends the previous fast-rate analysis of the random features method from the least squares loss to the 0-1 loss. We also show that a reweighted feature selection method, which approximates the optimized feature map, improves the performance of RFSVM in experiments on a synthetic data set.
Sep-12-2018
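The RFSVM idea described in the abstract — map the data through $N \ll m$ random features and fit a linear SVM on top — can be sketched as follows. This is a hedged illustration, not the paper's method: it uses scikit-learn's `RBFSampler` (plain random Fourier features, without the paper's optimized or reweighted feature map) together with `LinearSVC`, and the data set, `gamma`, and `C` values are arbitrary choices for demonstration.

```python
# Sketch of a linear SVM on N << m random Fourier features (the RFSVM setup).
# Assumptions: synthetic data from make_classification; RBFSampler stands in
# for the random feature map (NOT the paper's optimized/reweighted map).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import RBFSampler
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

m, N = 2000, 100  # m training samples, N << m random features
X, y = make_classification(n_samples=m, n_features=10, random_state=0)

rfsvm = make_pipeline(
    # Draw N random Fourier features approximating an RBF kernel.
    RBFSampler(gamma=0.2, n_components=N, random_state=0),
    # Linear SVM in the random feature space (0-1 loss is what the
    # paper analyzes; hinge loss is the practical surrogate here).
    LinearSVC(C=1.0),
)
rfsvm.fit(X, y)
print(f"training accuracy: {rfsvm.score(X, y):.2f}")
```

Because the feature map is finite-dimensional, training and prediction cost scale with $N$ rather than $m$, which is the computational appeal of random features that motivates the paper's statistical analysis.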
- Country:
- Europe > Spain
- Canary Islands (0.04)
- North America > United States
- Michigan > Washtenaw County
- Ann Arbor (0.14)
- New York > New York County
- New York City (0.04)
- Genre:
- Research Report (0.82)