Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
Yunwen Lei
Neural Information Processing Systems
While significant theoretical progress has been achieved, the generalization mystery of overparameterized neural networks remains largely elusive. In this paper, we study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability. We consider gradient descent (GD) and stochastic gradient descent (SGD) for training SNNs, and for both we develop consistent excess risk bounds by balancing optimization and generalization via early stopping.
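The early-stopping idea in the abstract can be illustrated concretely: run gradient descent on a one-hidden-layer network and pick the iterate with the smallest held-out risk, trading training (optimization) error against overfitting (generalization) error. The following is a minimal NumPy sketch under illustrative assumptions (toy data, ReLU activations, a square loss, and hand-picked hyperparameters), not the paper's actual analysis or experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with a train / validation split (all sizes illustrative).
n, d, m = 200, 5, 50                      # samples, input dim, hidden width
X = rng.normal(size=(n, d))
y = np.sin(X @ rng.normal(size=d)) + 0.1 * rng.normal(size=n)
Xtr, ytr, Xva, yva = X[:150], y[:150], X[150:], y[150:]

# Shallow network f(x) = a^T sigma(W x) with sigma = ReLU.
W = rng.normal(size=(m, d)) / np.sqrt(d)
a = rng.normal(size=m) / np.sqrt(m)

def risk(X, y, W, a):
    """Empirical risk: mean squared error of the network on (X, y)."""
    pred = np.maximum(X @ W.T, 0.0) @ a
    return 0.5 * np.mean((pred - y) ** 2)

eta, T = 0.05, 500                        # step size and iteration budget
risk0 = risk(Xva, yva, W, a)              # validation risk at initialization
best = (risk0, 0)                         # (best validation risk, its iterate)

for t in range(1, T + 1):
    # Forward pass on the training set.
    Z = Xtr @ W.T                         # pre-activations
    H = np.maximum(Z, 0.0)                # hidden-layer activations
    err = (H @ a - ytr) / len(ytr)        # gradient of the mean squared loss

    # Full-batch gradient descent step on both layers.
    grad_a = H.T @ err
    grad_W = ((err[:, None] * a[None, :]) * (Z > 0)).T @ Xtr
    a -= eta * grad_a
    W -= eta * grad_W

    # Early stopping: track the iterate with the lowest validation risk.
    v = risk(Xva, yva, W, a)
    if v < best[0]:
        best = (v, t)

print(f"early-stopping iterate t* = {best[1]}, validation risk = {best[0]:.4f}")
```

Running longer past `t*` would keep shrinking the training risk while the validation risk can start rising, which is exactly the optimization/generalization trade-off that early stopping balances.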