Implicit Regularization via Neural Feature Alignment

Aristide Baratin, Thomas George, César Laurent, R Devon Hjelm, Guillaume Lajoie, Pascal Vincent, Simon Lacoste-Julien

arXiv.org Machine Learning 

We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a regularization effect induced by a dynamical alignment of the neural tangent features introduced by Jacot et al., along a small number of task-relevant directions. This can be interpreted as a combined mechanism of feature selection and model compression. By extrapolating a new analysis of Rademacher complexity bounds for linear models, we motivate and study a heuristic complexity measure that captures this phenomenon, in terms of sequences of tangent kernel classes along the optimization paths.
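The abstract's central object is the tangent kernel built from per-example parameter gradients (tangent features). As a minimal illustrative sketch, not the paper's method, the snippet below computes tangent features for a tiny one-hidden-layer network via finite differences (the network shapes, function names, and finite-difference approach are all assumptions for illustration) and forms the empirical tangent kernel, whose spectrum is what an alignment analysis would inspect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer scalar network f(x) = v @ tanh(W @ x)
d, h, n = 3, 4, 5                     # input dim, hidden width, num examples
W = rng.normal(size=(h, d))
v = rng.normal(size=h)
X = rng.normal(size=(n, d))

def f(params, x):
    W, v = params
    return v @ np.tanh(W @ x)

def flatten(params):
    return np.concatenate([p.ravel() for p in params])

def unflatten(theta):
    return theta[:h * d].reshape(h, d), theta[h * d:]

def tangent_features(theta, X, eps=1e-5):
    """Per-example gradient of f w.r.t. all parameters (central differences)."""
    Phi = np.zeros((len(X), len(theta)))
    for j in range(len(theta)):
        e = np.zeros_like(theta)
        e[j] = eps
        for i, x in enumerate(X):
            Phi[i, j] = (f(unflatten(theta + e), x)
                         - f(unflatten(theta - e), x)) / (2 * eps)
    return Phi

theta = flatten([W, v])
Phi = tangent_features(theta, X)      # tangent features, shape (n, num_params)
K = Phi @ Phi.T                       # empirical tangent kernel on the n inputs

# K is symmetric positive semi-definite; a spectrum concentrated in a few
# large eigenvalues indicates tangent features aligned along few directions.
eigvals = np.linalg.eigvalsh(K)
print(np.allclose(K, K.T), (eigvals >= -1e-8).all())
```

Tracking how this spectrum evolves during training, rather than at a single parameter setting, is the kind of measurement the paper's alignment analysis concerns.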
