Explicit loss asymptotics in the gradient descent training of neural networks

Neural Information Processing Systems 

In the present work we take a different approach and show that the learning trajectory of a wide network in the lazy training regime can be characterized by explicit asymptotics at large training times.
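As a rough illustration of the setting (not the paper's derivation): in the lazy/NTK regime a wide network trained by gradient descent behaves like its linearization, so the training loss decomposes over the eigenmodes of a fixed kernel, and the large-time decay is governed by the small-eigenvalue tail of the spectrum. A minimal sketch, with a hypothetical kernel spectrum `lam` and target components `c`:

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's method): linearized
# gradient-descent dynamics give a per-eigenmode loss
#   L(t) = sum_i (1 - eta * lam_i)^(2t) * c_i^2
# where lam_i are kernel eigenvalues and c_i the target's mode components.

rng = np.random.default_rng(0)
n = 50
lam = np.sort(rng.uniform(0.01, 1.0, n))[::-1]  # hypothetical kernel spectrum
c = rng.normal(size=n)                           # hypothetical target components
eta = 0.5                                        # learning rate; eta * lam < 1

def loss(t):
    # exact loss of the linearized dynamics at discrete step t
    return float(np.sum((1.0 - eta * lam) ** (2 * t) * c ** 2))

losses = [loss(t) for t in range(0, 2000, 100)]
# At large t the smallest eigenvalues dominate, producing the slow
# asymptotic decay that the paper characterizes explicitly.
```

Here the loss is monotonically decreasing, and its late-time behavior is set by the modes with the smallest `eta * lam_i`.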
