Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers
Neural Information Processing Systems
The fundamental learning theory behind neural networks remains largely open. What classes of functions can neural networks actually learn? Why doesn't the trained network overfit when it is overparameterized? In this work, we prove that overparameterized neural networks can learn some notable concept classes, including those defined by two- and three-layer networks with fewer parameters and smooth activations. Moreover, the learning can be done simply by SGD (stochastic gradient descent) or its variants, in polynomial time and using polynomially many samples.
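As a rough illustration of the setting the abstract describes (an overparameterized student network trained by SGD to fit a concept class given by a smaller network with smooth activations), here is a minimal teacher-student sketch in NumPy. All dimensions, the learning rate, and the choice to train only the hidden-layer weights are illustrative assumptions, not the paper's exact construction or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Teacher: a small two-layer network with a smooth (tanh) activation.
# Sizes here are arbitrary illustrative choices.
d, k_teacher = 10, 5
W_t = rng.normal(size=(k_teacher, d))
a_t = rng.normal(size=k_teacher)

def teacher(x):
    return a_t @ np.tanh(W_t @ x)

# Student: a heavily overparameterized two-layer ReLU network
# (width m much larger than the teacher's width).
m = 2000
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def student(x, W):
    return a @ np.maximum(W @ x, 0.0)

# SGD on the squared loss over fresh samples, updating only the
# hidden-layer weights; this simplification is an assumption of
# this sketch, not necessarily the paper's algorithm.
lr, steps = 0.05, 5000
for _ in range(steps):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)          # unit-sphere inputs
    y = teacher(x)
    h = W @ x
    pred = a @ np.maximum(h, 0.0)
    # Gradient of 0.5 * (pred - y)^2 with respect to W.
    grad = (pred - y) * np.outer(a * (h > 0), x)
    W -= lr * grad

# Generalization check on fresh samples drawn from the same distribution.
errs = []
for _ in range(500):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)
    errs.append((student(x, W) - teacher(x)) ** 2)
print("mean squared test error:", np.mean(errs))
```

The point of the sketch is the mismatch in scale: the student has far more parameters than the teacher, yet plain SGD on fresh samples drives the test error down rather than overfitting, which is the phenomenon the paper analyzes.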