Gradient Descent: The Ultimate Optimizer

Kartik Chandra, Erik Meijer, Samantha Andow, Emilio Arroyo-Fang, Irene Dea, Johann George, Melissa Grueter, Basil Hosmer, Steffi Stumpos, Alanna Tempest, Shannon Yang

arXiv.org Machine Learning 

Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as the learning rate. There exist many techniques for automated hyperparameter optimization, but they typically introduce even more hyperparameters to control the hyperparameter optimization process. We propose to instead learn the hyperparameters themselves by gradient descent, and furthermore to learn the hyper-hyperparameters by gradient descent as well, and so on ad infinitum. As these towers of gradient-based optimizers grow, they become significantly less sensitive to the choice of top-level hyperparameters, hence decreasing the burden on the user to search for optimal values.
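To make the idea concrete, here is a minimal sketch of one level of this tower: learning the learning rate itself by gradient descent, assuming plain SGD on a toy quadratic loss. The hand-derived hypergradient formula below (dL/dα = -g_{t+1} · g_t for the step w_{t+1} = w_t - α g_t) is standard hypergradient algebra, and the names (grad, alpha, kappa) are illustrative; this is not code from the paper, which computes such hypergradients automatically by differentiating through the optimizer step.

```python
import numpy as np

def grad(w):
    """Gradient of a toy quadratic loss L(w) = 0.5 * ||w||^2."""
    return w

w = np.array([5.0, -3.0])   # model parameters
alpha = 0.01                # learning rate, itself learned below
kappa = 0.001               # hyper-learning-rate (the remaining top-level knob)

g_prev = grad(w)
for step in range(100):
    w = w - alpha * g_prev          # ordinary SGD step on the parameters
    g = grad(w)
    # Gradient step on alpha itself: since dL(w_{t+1})/d(alpha) = -g . g_prev,
    # descending that hypergradient increases alpha when consecutive gradients
    # agree in direction and decreases it when they oppose.
    alpha = alpha + kappa * np.dot(g, g_prev)
    g_prev = g

print(f"final loss {0.5 * np.dot(w, w):.6f}, learned alpha {alpha:.4f}")
```

The same trick applies one level up: kappa could itself be updated by an analogous gradient step, and so on, yielding the towers of optimizers the abstract describes, whose top-level hyperparameter matters progressively less.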
