Adaptive Proximal Gradient Method for Convex Optimization

Neural Information Processing Systems 

In this paper, we explore two fundamental first-order algorithms in convex optimization, namely, gradient descent (GD) and the proximal gradient method (ProxGD). Our focus is on making these algorithms entirely adaptive by leveraging local curvature information of smooth functions.
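The adaptive idea can be illustrated with a short sketch. The step-size rule below (the inverse of a local Lipschitz estimate computed from consecutive iterates and gradients) is a hypothetical illustration of curvature-adaptive stepping, not necessarily the exact rule analyzed in the paper; `grad_f` and `prox_g` are assumed to be supplied by the user.

```python
import numpy as np

def adaptive_prox_grad(grad_f, prox_g, x0, n_iters=100, lam0=1e-3):
    """Proximal gradient method with a step size adapted to a local
    curvature estimate L_k = ||grad_f(x_k) - grad_f(x_{k-1})|| / ||x_k - x_{k-1}||.

    Illustrative sketch only; the paper's adaptive rule may differ.
    grad_f(x): gradient of the smooth part f.
    prox_g(v, lam): proximal operator of the nonsmooth part g with step lam.
    """
    x_prev = x0
    g_prev = grad_f(x0)
    lam = lam0
    # First step uses the initial guess lam0, since no curvature info exists yet.
    x = prox_g(x_prev - lam * g_prev, lam)
    for _ in range(n_iters):
        g = grad_f(x)
        dx = np.linalg.norm(x - x_prev)
        dg = np.linalg.norm(g - g_prev)
        if dg > 0:
            # Step size = inverse of the local Lipschitz (curvature) estimate.
            lam = dx / dg
        x_prev, g_prev = x, g
        x = prox_g(x - lam * g, lam)
    return x
```

For a smooth problem one can pass the identity as `prox_g` (i.e. g = 0), recovering a curvature-adaptive gradient descent; for an l1-regularized problem, `prox_g` would be the soft-thresholding operator.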
