On the consistency of hyper-parameter selection in value-based deep reinforcement learning
Johan Obando-Ceron, João G. M. Araújo, Aaron Courville, Pablo Samuel Castro
arXiv.org Artificial Intelligence
Deep reinforcement learning (deep RL) has achieved tremendous success across various domains through a combination of algorithmic design and careful selection of hyper-parameters. Algorithmic improvements are often the result of iterative enhancements built upon prior approaches, while hyper-parameter choices are typically inherited from previous methods or fine-tuned specifically for the proposed technique. Despite their crucial impact on performance, hyper-parameter choices are frequently overshadowed by algorithmic advancements. This paper conducts an extensive empirical study of the reliability of hyper-parameter selection for value-based deep reinforcement learning agents, and introduces a new score to quantify the consistency and reliability of various hyper-parameters. Our findings not only help establish which hyper-parameters are most critical to tune, but also clarify which tunings remain consistent across different training regimes.
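The abstract does not specify how the paper's consistency score is computed. As a purely hypothetical sketch, one simple way to quantify whether a hyper-parameter's value ranking holds up across training regimes is the mean pairwise Spearman rank correlation of per-regime performance rankings (the function names and the rank-correlation choice below are illustrative assumptions, not the paper's method):

```python
# Hypothetical sketch only: the paper's actual score is not given in the
# abstract. This measures how consistently the ranking of a hyper-parameter's
# candidate values is preserved across training regimes, via mean pairwise
# Spearman rank correlation (pure standard library).
from itertools import combinations


def rank(values):
    """Ranks (0 = best) for a list of scores, where higher score is better."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks


def spearman(a, b):
    """Spearman correlation between two equal-length, tie-free rankings."""
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))


def consistency_score(returns_by_regime):
    """Mean pairwise Spearman correlation of hyper-parameter-value rankings
    across regimes; 1.0 means the same ordering holds everywhere."""
    rankings = [rank(r) for r in returns_by_regime]
    pairs = list(combinations(rankings, 2))
    return sum(spearman(a, b) for a, b in pairs) / len(pairs)


# Illustrative data: returns for 4 learning-rate values under 3 regimes.
regimes = [
    [0.9, 0.7, 0.5, 0.2],  # regime A
    [0.8, 0.6, 0.4, 0.1],  # regime B: same ordering as A
    [0.3, 0.9, 0.6, 0.2],  # regime C: ordering shifts
]
print(round(consistency_score(regimes), 3))  # → 0.6
```

A score near 1 would indicate a hyper-parameter whose tuning transfers across regimes, while a low score flags one that must be re-tuned per regime.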
Jul 2, 2024