Goto

Collaborating Authors

 Stumpos, Steffi


Coarsening Optimization for Differentiable Programming

arXiv.org Artificial Intelligence

A program written with differentiable programming can be differentiated automatically. The differentiation results can then be used for gradient-based optimization (e.g., gradient descent) of the parameters in the program. Differentiable programming has been used in scientific computing, physics simulations, and other domains to help mitigate the burden of manual, error-prone coding of derivative computations. Recent years have witnessed growing interest in differentiable programming in machine learning (ML) [11, 34] and probabilistic programming [30], to accommodate the needs of various customized ML operators, user-defined operations in the learning targets (e.g., the physical environment of reinforcement learning), and statistical sampling. The key technique in differentiable programming is automatic differentiation. For a program (P) that produces output (y) from given values (X), automatic differentiation automatically computes the derivatives (∂y/∂x) for each x ∈ X without the need for users to write the differentiation code. The given program P is called the primal code, and x is called an active input variable. Existing approaches to automatic differentiation fall into two categories: (i) symbolic differentiation, which uses expression manipulation in computer algebra systems; (ii) algorithmic differentiation, which performs a non-standard interpretation of a given computer program by replacing the domain of the variables to incorporate derivative values and redefining the semantics of the operators to propagate derivatives per the chain rule of differential calculus (elaborated in Section 2). Symbolic differentiation has been commonly regarded as inappropriate for differentiable programming, for several reasons: (i) it results in complex and cryptic expressions plagued with the problem of "expression swell" [5].
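To make the "non-standard interpretation" idea concrete, here is a minimal sketch of forward-mode algorithmic differentiation using dual numbers: variables carry a derivative alongside their value, and the arithmetic operators are redefined to propagate derivatives per the chain rule. The `Dual` class and the primal function `f` are illustrative assumptions, not code from the paper.

```python
# Minimal forward-mode automatic differentiation via dual numbers (sketch).
class Dual:
    """A value paired with its derivative; operators propagate both."""
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__


def f(x):
    # Primal code P: y = 3*x*x + 2*x + 1
    return 3 * x * x + 2 * x + 1


# Mark x as the active input variable by seeding its derivative with 1.
x = Dual(2.0, 1.0)
y = f(x)
print(y.value)  # 17.0  (primal output)
print(y.deriv)  # 14.0  (dy/dx = 6x + 2 evaluated at x = 2)
```

In contrast to symbolic differentiation, no closed-form expression for dy/dx is ever built; the derivative is accumulated numerically alongside the primal computation, which is what avoids expression swell.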


Gradient Descent: The Ultimate Optimizer

arXiv.org Machine Learning

Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as the learning rate. There exist many techniques for automated hyperparameter optimization, but they typically introduce even more hyperparameters to control the hyperparameter optimization process. We propose to instead learn the hyperparameters themselves by gradient descent, and furthermore to learn the hyper-hyperparameters by gradient descent as well, and so on ad infinitum. As these towers of gradient-based optimizers grow, they become significantly less sensitive to the choice of top-level hyperparameters, hence decreasing the burden on the user to search for optimal values.
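The following is a minimal sketch of the abstract's core idea on a toy 1-D quadratic: the learning rate is itself updated by gradient descent using the hypergradient of the loss with respect to it. The loss, `alpha`, and `kappa` are illustrative assumptions, not the paper's implementation, which derives these updates via automatic differentiation.

```python
# Sketch: learn the learning rate by gradient descent (hypergradient step).
def loss(w):
    return (w - 3.0) ** 2          # toy objective, minimized at w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # dL/dw

w = 0.0          # parameter
alpha = 0.01     # learning rate (the hyperparameter we also learn)
kappa = 0.001    # step size for the learning rate (hyper-hyperparameter)

g_prev = 0.0
for step in range(100):
    g = grad(w)
    # Since the previous update was w <- w - alpha * g_prev, the gradient of
    # the current loss w.r.t. alpha is -g * g_prev; step alpha down it.
    alpha -= kappa * (-g * g_prev)
    w -= alpha * g
    g_prev = g

print(w, alpha)   # w approaches 3.0 while alpha adapts on the fly
```

Stacking another level (learning `kappa` by the same rule) would add one more layer to the tower; as the abstract notes, the deeper the tower, the less the final result depends on the hand-chosen value at the top.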