Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks remains largely elusive. In this paper, we study the generalization behavior of shallow neural networks (SNNs) through the lens of algorithmic stability. We consider both gradient descent (GD) and stochastic gradient descent (SGD) for training SNNs, and for both we develop consistent excess risk bounds by balancing optimization and generalization via early stopping. Compared to existing analyses of GD, our new analysis requires a relaxed overparameterization assumption and also applies to SGD. The key to the improvement is a better estimation of the smallest eigenvalues of the Hessian matrices of the empirical risks and of the loss function along the trajectories of GD and SGD, obtained through a refined estimation of their iterates.
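To make the balance behind early stopping concrete, the following standard excess risk decomposition (a generic sketch in notation of our choosing, not the paper's exact statement: $R$ denotes the population risk, $R_S$ the empirical risk on the training sample $S$, $\mathbf{W}_t$ the iterate after $t$ steps, and $\mathbf{W}^*$ a population risk minimizer) exhibits the two competing terms:
$$
\mathbb{E}\big[R(\mathbf{W}_t)\big] - R(\mathbf{W}^*)
= \underbrace{\mathbb{E}\big[R(\mathbf{W}_t) - R_S(\mathbf{W}_t)\big]}_{\text{generalization gap (stability)}}
+ \underbrace{\mathbb{E}\big[R_S(\mathbf{W}_t) - R_S(\mathbf{W}^*)\big]}_{\text{optimization error}},
$$
where we use $\mathbb{E}[R_S(\mathbf{W}^*)] = R(\mathbf{W}^*)$ since $\mathbf{W}^*$ is independent of $S$. The generalization gap typically grows with $t$ while the optimization error shrinks as GD/SGD converges, so choosing the stopping time to balance the two terms yields an excess risk bound.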
Appendix for "Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks"

A Lemmas
In this section, we collect several lemmas useful for our analysis.

Lemma A.2. Let $\mathbf{W}, \mathbf{W}'$ be two parameter points and $z$ an example. According to Taylor's theorem, there exists $\alpha \in [0, 1]$ such that
$$
\ell(\mathbf{W}; z) - \ell(\mathbf{W}'; z)
= \big\langle \mathbf{W} - \mathbf{W}', \nabla \ell(\mathbf{W}'; z) \big\rangle
+ \frac{1}{2} (\mathbf{W} - \mathbf{W}')^{\top} \nabla^2 \ell\big(\mathbf{W}' + \alpha(\mathbf{W} - \mathbf{W}'); z\big) (\mathbf{W} - \mathbf{W}').
$$
The proof is completed.

The following lemma shows the self-bounding property of smooth and nonnegative functions: any nonnegative, $\rho$-smooth function $g$ satisfies $\|\nabla g(w)\|_2^2 \le 2\rho\, g(w)$ for all $w$ (a short derivation is given at the end of this section).

Lemma A.6 is stated under Assumptions 1 and 2; the remaining arguments in its proof are the same as those in the proof of Lemma 5 in [ ].
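For completeness, the self-bounding inequality above follows from a one-line argument using only smoothness and nonnegativity (the symbols $g$ and $\rho$ are generic notation, not tied to the paper's statement). For a $\rho$-smooth, nonnegative $g$ and any $w$, applying the descent lemma at $v = w - \frac{1}{\rho}\nabla g(w)$ gives
$$
0 \le g(v) \le g(w) + \Big\langle \nabla g(w), -\tfrac{1}{\rho}\nabla g(w) \Big\rangle + \frac{\rho}{2}\Big\|\tfrac{1}{\rho}\nabla g(w)\Big\|_2^2
= g(w) - \frac{1}{2\rho}\|\nabla g(w)\|_2^2,
$$
and rearranging yields $\|\nabla g(w)\|_2^2 \le 2\rho\, g(w)$.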