Path-conditioned training: a principled way to rescale ReLU neural networks

Lebeurrier, Arthur, Vayer, Titouan, Gribonval, Rémi

arXiv.org Machine Learning

Despite recent algorithmic advances, we still lack principled ways to leverage the well-documented rescaling symmetries in ReLU neural network parameters. While two properly rescaled weight configurations implement the same function, their training dynamics can be dramatically different. To offer a fresh perspective on exploiting this phenomenon, we build on the recent path-lifting framework, which provides a compact factorization of ReLU networks. We introduce a geometrically motivated criterion for rescaling neural network parameters whose minimization leads to a conditioning strategy that aligns a kernel in the path-lifting space with a chosen reference. We derive an efficient algorithm to perform this alignment. In the context of random network initialization, we analyze how the architecture and the initialization scale jointly impact the output of the proposed method. Numerical experiments illustrate its potential to speed up training.
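The rescaling symmetry mentioned in the abstract can be made concrete with a small sketch (an illustrative example, not the paper's method): for a two-layer ReLU network and any positive per-neuron scaling, multiplying a hidden neuron's incoming weights by a factor and dividing its outgoing weights by the same factor leaves the function unchanged, because `relu(c*z) = c*relu(z)` for `c > 0`.

```python
import numpy as np

# Hypothetical two-layer ReLU network: f(x) = W2 @ relu(W1 @ x)
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # hidden layer: 4 neurons, 3 inputs
W2 = rng.standard_normal((2, 4))   # output layer: 2 outputs
x = rng.standard_normal(3)

relu = lambda z: np.maximum(z, 0.0)

# Rescaling symmetry: for a positive diagonal scaling d,
# (W2 / d) @ relu((d * W1) @ x) == W2 @ relu(W1 @ x),
# since relu is positively homogeneous.
d = rng.uniform(0.5, 2.0, size=4)   # positive per-neuron scales
W1_scaled = d[:, None] * W1         # scale each neuron's incoming weights
W2_scaled = W2 / d[None, :]         # inverse-scale its outgoing weights

f = W2 @ relu(W1 @ x)
f_scaled = W2_scaled @ relu(W1_scaled @ x)
assert np.allclose(f, f_scaled)     # same function, different parameters
```

Although the function is identical, the two parameterizations have different weight magnitudes per layer, which is why gradient-based training behaves differently on each, motivating a principled choice of rescaling.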




Nearly Optimal Bounds for Cyclic Forgetting

Neural Information Processing Systems

One challenge of continual learning is "catastrophic forgetting" [Had+20; VT19; Kem+18]: a model trained on a new context tends to forget an earlier context A. However, if contexts similar to A arise repeatedly, this forgetting may be undesirable. In machine learning, many data sets display cyclic or periodic patterns.



A Meta-Analysis of Overfitting in Machine Learning

Rebecca Roelofs, Vaishaal Shankar, Benjamin Recht, Sara Fridovich-Keil, Moritz Hardt, John Miller, Ludwig Schmidt

Neural Information Processing Systems

In each competition, numerous practitioners repeatedly evaluated their progress against a holdout set that forms the basis of a public ranking available throughout the competition. Performance on a separate test set used only once determined the final ranking.