We then define a surrogate loss $L_{\mathrm{toy}}(\vec{P} \in \mathbb{R}^D)$ for a network configuration $\vec{P}$ in this weight space, which we choose to depend monotonically on the $L_2$ distance to the nearest $n$-wedge. Together, these define locally an $n$-dimensional hyperplane of finite thickness in the remaining $D - n$ thin directions, i.e. a cuboid. To go beyond classification, we also looked at CNN-based autoencoders. In all cases the results supported our landscape model and we will include them in the final version. R5: Radial tunnels = what low-dimensional cuts would show.
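Read literally, the surrogate assigns low loss inside each cuboid and grows with the $L_2$ distance to the nearest one. Below is a minimal numpy sketch of that construction; the dimensions $D$ and $n$, the number of wedges, the thickness, and the quadratic monotone map are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n = 100, 4        # ambient weight-space dimension and wedge dimension (illustrative)
num_wedges = 3
thickness = 0.05     # half-thickness of each cuboid in the D - n thin directions

# Each wedge: an n-dimensional affine plane span(B) + offset, thickened into a cuboid.
wedges = []
for _ in range(num_wedges):
    B, _ = np.linalg.qr(rng.normal(size=(D, n)))   # orthonormal basis of the wedge plane
    wedges.append((B, rng.normal(size=D)))

def dist_to_wedge(P, B, offset):
    """L2 distance from P to the cuboid: thin-direction residual beyond the half-thickness."""
    r = P - offset
    r_thin = r - B @ (B.T @ r)                     # component in the D - n thin directions
    return max(np.linalg.norm(r_thin) - thickness, 0.0)

def L_toy(P):
    """Surrogate loss, monotone (here quadratic) in the distance to the nearest wedge."""
    return min(dist_to_wedge(P, B, o) for B, o in wedges) ** 2

print(L_toy(rng.normal(size=D)))   # a random configuration typically lands far from every wedge
```

Any monotone map of the distance would satisfy the stated requirement; the quadratic is just one convenient choice.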
A common point you brought up
Thank you very much for your detailed reviews and comments. The simplest version of our toy landscape is constructed as follows: [...]. As such, our toy model serves us well, albeit it doesn't [...]. In real nets, we find a large number of weight-space directions in which we can move very far while the loss doesn't increase. We find the full low-loss manifold to be a union of such wedges in different directions and orientations. We will include this extended discussion in the paper.
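The observation about far-reaching low-loss directions suggests a simple probe: flatten a network's weights, pick a random unit direction, and track the loss while moving increasingly far along it. The PyTorch sketch below shows only the probe mechanics; the tiny untrained MLP and synthetic data are placeholders to keep it self-contained, whereas the finding above concerns real trained networks.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))   # synthetic stand-in data
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()

theta0 = torch.cat([p.detach().flatten() for p in model.parameters()])
direction = torch.randn_like(theta0)
direction /= direction.norm()                              # random unit direction

def load_flat(model, flat):
    """Copy a flat parameter vector back into the model, layer by layer."""
    i = 0
    for p in model.parameters():
        p.data.copy_(flat[i:i + p.numel()].view_as(p))
        i += p.numel()

# Probe the loss while moving ever farther along the fixed direction.
for radius in [0.0, 1.0, 5.0, 20.0]:
    load_flat(model, theta0 + radius * direction)
    with torch.no_grad():
        print(f"radius {radius:5.1f}  loss {loss_fn(model(X), y).item():.4f}")
```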
Large Scale Structure of Neural Network Loss Landscapes
Stanislav Fort, Stanislaw Jastrzebski
There are many surprising and perhaps counter-intuitive properties of the optimization of deep neural networks. We propose and experimentally verify a unified phenomenological model of the loss landscape that incorporates many of them. High dimensionality plays a key role in our model. Our core idea is to model the loss landscape as a set of high-dimensional \emph{wedges} that together form a large-scale, inter-connected structure towards which optimization is drawn. We first show that hyperparameter choices such as learning rate, network width, and $L_2$ regularization affect the path the optimizer takes through the landscape in similar ways, influencing the large-scale curvature of the regions the optimizer explores. We then predict and demonstrate new counter-intuitive properties of the loss landscape. We show the existence of low-loss subspaces connecting a set (not only a pair) of solutions, and verify it experimentally. Finally, we analyze recently popular ensembling techniques for deep networks in the light of our model.
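The claim about low-loss subspaces connecting a set of solutions admits a simple probe: take several solution weight vectors, sample points in their convex hull, and evaluate the loss there. The sketch below shows only the probe mechanics under assumed placeholders (a tiny MLP, synthetic data, and untrained stand-ins for the solutions); the paper's experiments use actually trained networks, and the plain linear simplex here is an illustrative choice, not necessarily the paper's construction of the connecting subspace.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))   # synthetic stand-in data
loss_fn = nn.CrossEntropyLoss()

def make_model():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

def flat_params(model):
    return torch.cat([p.detach().flatten() for p in model.parameters()])

def load_flat(model, flat):
    i = 0
    for p in model.parameters():
        p.data.copy_(flat[i:i + p.numel()].view_as(p))
        i += p.numel()

# Untrained stand-ins for a set of independently found solutions.
solutions = [flat_params(make_model()) for _ in range(3)]
probe = make_model()

# Sample barycentric coordinates and evaluate the loss inside the solution simplex.
for _ in range(5):
    alpha = torch.distributions.Dirichlet(torch.ones(len(solutions))).sample()
    load_flat(probe, sum(a * s for a, s in zip(alpha, solutions)))
    with torch.no_grad():
        print([round(a, 2) for a in alpha.tolist()],
              loss_fn(probe(X), y).item())
```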