Neural Networks as Kernel Learners: The Silent Alignment Effect
Alexander Atanasov, Blake Bordelon, Cengiz Pehlevan
Can neural networks in the rich feature learning regime learn a kernel machine with a data-dependent kernel? We demonstrate that this can indeed happen due to a phenomenon we term silent alignment, which requires that the tangent kernel of a network evolves in eigenstructure while small and before the loss appreciably decreases, and grows only in overall scale afterwards. We empirically show that such an effect takes place in homogeneous neural networks with small initialization and whitened data. We provide an analytical treatment of this effect for fully connected linear networks. In general, we find that the kernel develops a low-rank contribution in the early phase of training and then grows only in overall scale, yielding a function equivalent to a kernel regression solution with the final network's tangent kernel. The early spectral learning of the kernel depends on the depth. We also demonstrate that non-whitened data can weaken the silent alignment effect.

Despite the numerous empirical successes of deep learning, much of the underlying theory remains poorly understood. One promising direction toward an interpretable account of deep learning is the study of the relationship between deep neural networks and kernel machines. Several studies in recent years have shown that gradient flow on infinitely wide neural networks with a certain parameterization gives rise to linearized dynamics in parameter space (Lee et al., 2019; Liu et al., 2020) and, consequently, to a kernel regression solution in function space with a kernel known as the neural tangent kernel (NTK) (Jacot et al., 2018; Arora et al., 2019).
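The correspondence described above can be checked numerically: train a network from small initialization, compute the empirical tangent kernel of the trained network, and compare the network's predictions to kernel regression with that final kernel. The sketch below is a minimal illustration in JAX, not the authors' code; the deep linear architecture, the synthetic roughly whitened (isotropic Gaussian) data, all hyperparameters, and helper names such as `forward` and `empirical_ntk` are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def init_params(key, widths, scale=1e-2):
    # Small initialization places the network in the rich (feature-learning) regime.
    params = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        params.append(scale * jax.random.normal(sub, (d_out, d_in)) / jnp.sqrt(d_in))
    return params

def forward(params, x):
    # Deep linear network: f(x) = W_L ... W_1 x (homogeneous in its parameters).
    h = x
    for W in params:
        h = h @ W.T
    return h.squeeze(-1)

def empirical_ntk(params, X1, X2):
    # K(x, x') = <df/dtheta(x), df/dtheta(x')>, built from per-example Jacobians.
    jac = jax.vmap(lambda x: jax.grad(lambda p: forward(p, x[None])[0])(params))
    flat = lambda J: jnp.concatenate([j.reshape(j.shape[0], -1) for j in J], axis=1)
    return flat(jac(X1)) @ flat(jac(X2)).T

def loss(params, X, y):
    return 0.5 * jnp.mean((forward(params, X) - y) ** 2)

key_data, key_init, key_task, key_test = jax.random.split(jax.random.PRNGKey(0), 4)
n, d = 64, 10
X = jax.random.normal(key_data, (n, d))        # approximately whitened inputs
w_star = jax.random.normal(key_task, (d,))
y = X @ w_star / jnp.sqrt(d)                   # linear teacher targets

params = init_params(key_init, [d, 128, 1])
grad_fn = jax.jit(jax.grad(loss))
lr = 0.5
for step in range(20000):
    grads = grad_fn(params, X, y)
    params = [W - lr * g for W, g in zip(params, grads)]

# Compare the trained network to kernel regression with its *final* tangent kernel.
X_test = jax.random.normal(key_test, (32, d))
K_train = empirical_ntk(params, X, X)
K_test = empirical_ntk(params, X_test, X)
f_kernel = K_test @ jnp.linalg.solve(K_train + 1e-8 * jnp.eye(n), y)
f_net = forward(params, X_test)
print("max |f_net - f_kernel|:", jnp.max(jnp.abs(f_net - f_kernel)))
```

Under silent alignment the two test predictions should nearly coincide: the small initialization makes the network's initial function negligible, and whitened data lets the kernel's eigenstructure settle before the loss drops, so the end-of-training function is captured by regression with the final empirical NTK. With strongly non-whitened data or large initialization, the gap printed above would be expected to grow.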
Dec-2-2021