Over-parameterization as a Catalyst for Better Generalization of Deep ReLU Networks
ABSTRACT

To analyze deep ReLU networks, we adopt a student-teacher setting in which an over-parameterized student network learns from the output of a fixed teacher network of the same depth, with Stochastic Gradient Descent (SGD). First, we prove that when the gradient is zero (or bounded above by a small constant) at every data point in training, a situation called the interpolation setting, there exists a many-to-one alignment between student and teacher nodes in the lowest layer under mild conditions. This suggests that generalization on unseen data is achievable, even though the same condition often also yields zero training error. Second, analysis of noisy recovery and training dynamics in 2-layer networks shows that strong teacher nodes (with large fan-out weights) are learned first, while weak teacher nodes are left unlearned until a late stage of training. As a result, it could take a long time to converge to these small-gradient critical points. Our analysis shows that over-parameterization plays two roles: (1) it is a necessary condition for alignment to happen at the critical points, and (2) in training dynamics, it helps student nodes cover more teacher nodes with fewer iterations.

Although networks with even one hidden layer can fit any function (Hornik et al., 1989), it remains an open question how such networks can generalize to new data. Contrary to what traditional machine learning theory predicts, empirical evidence (Zhang et al., 2017) shows that more parameters in a neural network lead to better generalization. How over-parameterization yields strong generalization is an important question for understanding how deep learning works. In this paper, we analyze multi-layer ReLU networks by adopting a teacher-student setting. The fixed teacher network provides the output for the student to learn via SGD. The student is over-parameterized (or over-realized): it has more nodes than the teacher.
Therefore, there exist student weights whose gradient at every data point is zero. Here, we want to study the inverse problem: with small gradients at every training sample, can the student weights recover the teacher's? If so, then generalization performance can be guaranteed whenever training converges to such critical points. In this paper, we show that this so-called interpolation setting (Ma et al., 2017; Liu & Belkin, 2018; Bassily et al., 2018) leads to alignment: under certain conditions, each teacher node is provably aligned with at least one student node in the lowest layer. The condition is simply that the teacher node is observed by at least one student node, i.e., the teacher's ReLU boundary lies in the activation region of that student node. Therefore, more over-parameterization increases the probability of teacher nodes being observed and thus being aligned. Furthermore, in the 2-layer case, student nodes that are not aligned with any teacher node have zero contribution to the output and can be pruned.
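As a concrete instance of this setting, the sketch below trains an over-parameterized 2-layer ReLU student on the outputs of a fixed 2-layer teacher with plain SGD. It is purely illustrative: the sizes, learning rate, and step count are hypothetical choices, not values from the paper, and the held-out error against the teacher serves as a proxy for generalization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: the student has more hidden nodes than the teacher.
d, m_teacher, m_student = 10, 3, 12

# Fixed 2-layer teacher: y = a_t . relu(W_t x)
W_t = rng.standard_normal((m_teacher, d))
a_t = rng.standard_normal(m_teacher)

# Over-parameterized student of the same depth (small random init)
W_s = 0.5 * rng.standard_normal((m_student, d))
a_s = 0.5 * rng.standard_normal(m_student)

def relu(z):
    return np.maximum(z, 0.0)

def teacher(x):
    return a_t @ relu(W_t @ x)

def student(x):
    return a_s @ relu(W_s @ x)

# Held-out samples to measure error against the teacher (unseen data)
x_test = rng.standard_normal((1000, d))

def test_mse():
    return float(np.mean([(student(x) - teacher(x)) ** 2 for x in x_test]))

mse_before = test_mse()

# Plain SGD on the squared loss, one fresh sample per step
lr = 0.005
for _ in range(8000):
    x = rng.standard_normal(d)
    h = relu(W_s @ x)
    err = a_s @ h - teacher(x)
    # Gradients of 0.5 * err^2 w.r.t. a_s and W_s (computed before updating)
    g_a = err * h
    g_W = err * np.outer(a_s * (h > 0), x)
    a_s -= lr * g_a
    W_s -= lr * g_W

mse_after = test_mse()
```

With more student hidden nodes than teacher nodes, each teacher ReLU boundary is more likely to fall inside some student node's activation region, which is the observation condition the alignment result relies on.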
Oct-17-2019