Fast Efficient Hyperparameter Tuning for Policy Gradients
Supratik Paul, Vitaly Kurin, Shimon Whiteson
The performance of policy gradient methods is sensitive to hyperparameter settings that must be tuned for any new application. Widely used grid search methods for tuning hyperparameters are sample inefficient and computationally expensive. More advanced methods like Population Based Training (Jaderberg et al., 2017), which learn optimal schedules for hyperparameters instead of fixed settings, can yield better results, but are also sample inefficient and computationally expensive. In this paper, we propose Hyperparameter Optimisation on the Fly (HOOF), a gradient-free meta-learning algorithm that can automatically learn an optimal schedule for hyperparameters that affect the policy update directly through the gradient. The main idea is to use existing trajectories sampled by the policy gradient method to optimise a one-step improvement objective, yielding a sample- and computationally efficient algorithm that is easy to implement. Our experimental results across multiple domains and algorithms show that using HOOF to learn these hyperparameter schedules leads to faster learning with improved performance.
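The one-step improvement idea in the abstract can be sketched in code: score each candidate hyperparameter setting (here, a learning rate) by re-weighting the returns of already-collected trajectories with importance sampling, and keep the candidate with the highest estimated value. The toy 1-D Gaussian-policy setup below is an illustrative assumption, not the paper's experimental setting, and the helper names (`logp`, `is_value`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed for illustration): a 1-D Gaussian policy a ~ N(theta, 1)
# with reward -a**2, so the optimal policy mean is 0.
theta = 2.0
actions = rng.normal(theta, 1.0, size=500)   # "existing trajectories"
returns = -actions ** 2

def logp(mean):
    # Log-density (up to a constant) of each sampled action under a
    # candidate policy with the given mean and unit variance.
    return -0.5 * (actions - mean) ** 2

def is_value(mean):
    # Weighted importance sampling estimate of a candidate policy's
    # expected return, using only the existing samples.
    w = np.exp(logp(mean) - logp(theta))
    return float(np.dot(w / w.sum(), returns))

# REINFORCE-style gradient of expected return w.r.t. the policy mean.
grad = float(np.mean((actions - theta) * returns))

# HOOF-style step: evaluate candidate learning rates with the
# importance-sampled one-step objective; no extra environment samples needed.
candidates = [0.01, 0.05, 0.1, 0.25]
scores = {lr: is_value(theta + lr * grad) for lr in candidates}
best_lr = max(scores, key=scores.get)
theta_new = theta + best_lr * grad
```

In this toy problem the gradient points toward the optimum, so larger steps score higher and the selection picks the largest candidate; in practice the paper notes that such off-policy estimates degrade as the candidate policy moves far from the sampling policy, which motivates keeping the per-step update constrained.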
Feb-18-2019