Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization
Benjamin Aubin, Florent Krzakala, Yue M. Lu, Lenka Zdeborová
We consider a commonly studied supervised classification task on a synthetic dataset whose labels are generated by feeding a one-layer neural network random i.i.d. inputs. We study the generalization performance of standard classifiers in the high-dimensional regime where the ratio $\alpha = n/d$ is kept finite as the dimension $d$ and the number of samples $n$ grow large. Our contribution is threefold: First, we prove a formula for the generalization error achieved by $\ell_2$-regularized classifiers that minimize a convex loss; this formula was first obtained via the heuristic replica method of statistical physics. Second, focusing on commonly used loss functions and optimizing the $\ell_2$ regularization strength, we observe that while ridge regression performs poorly, logistic and hinge regression come surprisingly close to the Bayes-optimal generalization error. As $\alpha \to \infty$ they achieve Bayes-optimal rates, a fact that does not follow from margin-based generalization error bounds. Third, we design an optimal loss and regularizer that provably lead to Bayes-optimal generalization error.
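To make the setup concrete, here is a minimal sketch of the experiment the abstract describes: a teacher perceptron producing labels $y = \mathrm{sign}(x \cdot w^\star/\sqrt{d})$ on i.i.d. Gaussian inputs, with ridge, logistic, and hinge classifiers compared at a fixed sample ratio $\alpha = n/d$. The sign teacher nonlinearity, the regularization strengths, and the scikit-learn estimators are illustrative assumptions, not details taken from the paper (which analyzes the exact asymptotics rather than finite-size simulations).

```python
# Sketch of the data model from the abstract: a one-layer "teacher" network
# generates labels from i.i.d. Gaussian inputs, and l2-regularized convex
# classifiers are fit at a fixed sample ratio alpha = n/d.
# The regularization strengths below are arbitrary; the paper optimizes them.
import numpy as np
from sklearn.linear_model import LogisticRegression, RidgeClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

d = 200                    # input dimension
alpha = 3.0                # sample ratio n/d
n = int(alpha * d)

w_star = rng.standard_normal(d)                    # teacher weights
X_train = rng.standard_normal((n, d))              # i.i.d. Gaussian inputs
y_train = np.sign(X_train @ w_star / np.sqrt(d))   # teacher labels in {-1, +1}
X_test = rng.standard_normal((10 * n, d))
y_test = np.sign(X_test @ w_star / np.sqrt(d))

# Three l2-regularized convex losses: squared (ridge), logistic, hinge.
models = {
    "ridge":    RidgeClassifier(alpha=1.0),
    "logistic": LogisticRegression(C=1.0, penalty="l2"),
    "hinge":    LinearSVC(C=1.0, loss="hinge"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    err = np.mean(model.predict(X_test) != y_test)
    print(f"{name:8s} generalization error ~ {err:.3f}")
```

Sweeping `alpha` in this sketch and averaging over seeds reproduces the qualitative picture the abstract reports: the logistic and hinge curves track the Bayes-optimal error far more closely than ridge does.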
Nov-7-2020