Stochastic Newton Proximal Extragradient Method
–Neural Information Processing Systems
Stochastic second-order methods achieve fast local convergence in strongly convex optimization by using noisy Hessian estimates to precondition the gradient. However, these methods typically reach superlinear convergence only when the stochastic Hessian noise diminishes, which drives per-iteration costs up over time. Recent work in [1] addressed this with a Hessian averaging scheme that achieves superlinear convergence without increasing per-iteration costs.
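To make the preconditioning idea concrete, here is a minimal sketch of a stochastic Newton iteration that preconditions the gradient with a running average of noisy Hessian estimates. The function names (`grad_fn`, `hess_sample_fn`) and the uniform averaging weights are illustrative assumptions, not the exact scheme of [1] or of the proximal extragradient method named in the title.

```python
import numpy as np

def stochastic_newton_hessian_averaging(grad_fn, hess_sample_fn, x0, n_iters=100):
    """Sketch: precondition the gradient with an average of noisy Hessian samples.

    grad_fn(x)        -> gradient of the objective at x (assumed exact here)
    hess_sample_fn(x) -> one noisy, symmetric positive-definite Hessian estimate at x
    """
    x = np.asarray(x0, dtype=float)
    H_avg = None
    for k in range(1, n_iters + 1):
        H_hat = hess_sample_fn(x)                 # noisy Hessian estimate
        if H_avg is None:
            H_avg = H_hat
        else:
            # Uniform averaging: H_avg_k = ((k-1) * H_avg_{k-1} + H_hat_k) / k,
            # so the per-iteration sampling cost stays constant while the
            # averaged estimate becomes less noisy over time.
            H_avg = ((k - 1) * H_avg + H_hat) / k
        g = grad_fn(x)
        # Newton-type step using the averaged Hessian as the preconditioner.
        x = x - np.linalg.solve(H_avg, g)
    return x

if __name__ == "__main__":
    # Toy strongly convex quadratic with additive Hessian noise (assumed example).
    rng = np.random.default_rng(0)
    A = np.diag([1.0, 5.0, 10.0])
    grad = lambda x: A @ x
    hess_sample = lambda x: A + 0.1 * rng.standard_normal((3, 3))
    print(stochastic_newton_hessian_averaging(grad, hess_sample, np.ones(3)))
```

The key design point the sketch illustrates is that the averaged Hessian is refined from the same single noisy sample per iteration, rather than from progressively larger Hessian batches.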