Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function
This article suggests that deterministic Gradient Descent, which does not use any stochastic gradient approximation, can still exhibit stochastic behaviors. In particular, it shows that if the objective function exhibits multiscale behaviors, then in a large-learning-rate regime that resolves only the macroscopic but not the microscopic details of the objective, the deterministic GD dynamics can become chaotic and converge not to a local minimizer but to a statistical distribution. In this sense, deterministic GD resembles stochastic GD even though no stochasticity is injected. A sufficient condition is also established for approximating this long-time statistical limit by a rescaled Gibbs distribution, which, for example, allows escapes from local minima to be quantified. Both theoretical and numerical demonstrations are provided, and the theoretical part relies on the construction of a stochastic map that uses bounded noise (as opposed to Gaussian noise).
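To make the abstract's claim concrete, here is a minimal numerical sketch (not taken from the paper) of deterministic GD on a multiscale objective of the form f(x) = F(x) + eps * phi(x/eps). The specific choices F(x) = x^2/2, phi(y) = cos(y), and the values of eps and h below are illustrative assumptions, not the paper's construction. When eps << h, each GD step samples the fast gradient term at effectively unpredictable phases, so the iterates behave like bounded noise around the macroscopic dynamics and fill out a stationary-looking histogram instead of settling at a minimizer.

```python
# Sketch: deterministic GD on a multiscale objective
#   f(x) = F(x) + eps * phi(x / eps),  F(x) = x^2 / 2,  phi(y) = cos(y)
# run with a learning rate h that resolves F but not the eps-scale oscillation.
# All parameter values here are illustrative assumptions.

import numpy as np

eps = 1e-3   # microscale of the fast oscillation
h = 0.1      # learning rate: eps << h << macroscale

def grad_f(x):
    # f'(x) = F'(x) + phi'(x / eps) = x - sin(x / eps);
    # note the fast term stays O(1) even though its potential is O(eps).
    return x - np.sin(x / eps)

x = 1.0
iterates = []
for k in range(200_000):
    x = x - h * grad_f(x)   # plain deterministic GD, no injected noise
    if k >= 10_000:         # discard the transient, keep long-time statistics
        iterates.append(x)

iterates = np.asarray(iterates)
# If the dynamics are chaotic, as the paper argues, the iterates do not
# converge to a local minimizer but spread into a statistical distribution.
print("mean %.4f, std %.4f" % (iterates.mean(), iterates.std()))
hist, edges = np.histogram(iterates, bins=50, density=True)
```

In this sketch the histogram of long-time iterates plays the role of the statistical limit the abstract describes; whether it is well approximated by a rescaled Gibbs distribution is exactly the sufficient condition the paper studies.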
Review for NeurIPS paper: Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function
Additional Feedback: [After rebuttal] I appreciate the additional explanations in the rebuttal. I think the example (in a more complete version) will go a long way toward improving the paper, but as presented, not enough detail is given for a proper evaluation, so I look forward to reading a revised version of this work. Note that my tautology comment is not saying that the proof is trivial, but that the way it is written masks the potential insights the proof may give; in particular, there should be a result showing that such a limit as in Cons 1 exists under some general conditions characterising the data and the model architecture. I believe the example provided in the rebuttal may be useful for formalising this. On first reading, these conditions do not appear well motivated.
Meta-review for NeurIPS paper: Stochasticity of Deterministic Gradient Descent: Large Learning Rate for Multiscale Objective Function
Before the author response, all the reviewers seemed to agree that the results were quite interesting (and I agree), but had concerns about the connection to ML. The author response included examples that mostly addressed this concern, so two reviewers recommended acceptance, while another (reviewer 1) recommended rejection, though borderline. However, I feel the remaining concerns raised by reviewer 1 are rather minor.