A Survey of Meta-Reinforcement Learning
Beck, Jacob, Vuorio, Risto, Liu, Evan Zheran, Xiong, Zheng, Zintgraf, Luisa, Finn, Chelsea, Whiteson, Shimon
arXiv.org Artificial Intelligence
While deep reinforcement learning (RL) has fueled multiple high-profile successes in machine learning, it is held back from more widespread adoption by its often poor data efficiency and the limited generality of the policies it produces. A promising approach for alleviating these limitations is to cast the development of better RL algorithms as a machine learning problem itself in a process called meta-RL. Meta-RL is most commonly studied in a problem setting where, given a distribution of tasks, the goal is to learn a policy that is capable of adapting to any new task from the task distribution with as little data as possible. In this survey, we describe the meta-RL problem setting in detail as well as its major variations. We discuss how, at a high level, meta-RL research can be clustered based on the presence of a task distribution and the learning budget available for each individual task. Using these clusters, we then survey meta-RL algorithms and applications. We conclude by presenting the open problems on the path to making meta-RL part of the standard toolbox for a deep RL practitioner.
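To make the problem setting described above concrete, the following is a minimal toy sketch of the meta-RL loop: an outer loop samples tasks from a task distribution, and an inner loop adapts to each sampled task using only a small data budget, after which post-adaptation performance is measured. The `BanditTask` class and the `sample_task` and `adapt` functions are illustrative assumptions for this sketch, not anything proposed in the survey; in actual meta-RL the adaptation rule itself is what gets meta-learned.

```python
# Toy sketch of the meta-RL problem setting (illustrative only).
import random

class BanditTask:
    """A toy task: a 2-armed bandit whose better arm is unknown."""
    def __init__(self, better_arm):
        self.better_arm = better_arm

    def reward(self, arm):
        # Noisy reward, higher in expectation for the better arm.
        return (1.0 if arm == self.better_arm else 0.0) + random.gauss(0, 0.1)

def sample_task():
    """Draw a task from the task distribution p(T)."""
    return BanditTask(better_arm=random.randint(0, 1))

def adapt(task, num_episodes=5):
    """Inner loop: adapt to a new task with a small data budget.

    Here adaptation is a hand-coded empirical-mean arm choice; in
    meta-RL this adaptation procedure would itself be learned.
    """
    totals = [0.0, 0.0]
    counts = [0, 0]
    for ep in range(num_episodes):
        # Try each arm once, then exploit the better-looking arm.
        arm = ep if ep < 2 else max((0, 1), key=lambda a: totals[a] / counts[a])
        r = task.reward(arm)
        totals[arm] += r
        counts[arm] += 1
    return max((0, 1), key=lambda a: totals[a] / counts[a])

# Outer loop: evaluate post-adaptation performance across tasks
# drawn from the distribution (the quantity meta-RL optimizes).
score = 0.0
for _ in range(100):
    task = sample_task()
    adapted_arm = adapt(task)
    score += task.reward(adapted_arm)
print(f"average post-adaptation reward: {score / 100:.2f}")
```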
Jan-19-2023