On Multi-objective Policy Optimization as a Tool for Reinforcement Learning: Case Studies in Offline RL and Finetuning

Abbas Abdolmaleki, Sandy H. Huang, Giulia Vezzani, Bobak Shahriari, Jost Tobias Springenberg, Shruti Mishra, Dhruva TB, Arunkumar Byravan, Konstantinos Bousmalis, Andras Gyorgy, Csaba Szepesvari, Raia Hadsell, Nicolas Heess, Martin Riedmiller

arXiv.org Artificial Intelligence 

Many advances that have improved the robustness and efficiency of deep reinforcement learning (RL) algorithms can, in one way or another, be understood as introducing additional objectives or constraints in the policy optimization step. These include ideas as far-ranging as exploration bonuses, entropy regularization, and regularization toward teachers or data priors. Often, the task reward and auxiliary objectives conflict, and in this paper we argue that this makes it natural to treat these cases as instances of multi-objective (MO) optimization problems. We demonstrate how this perspective allows us to develop novel and more effective RL algorithms. In particular, we focus on offline RL and finetuning as case studies, and show that existing approaches can be understood as MO algorithms relying on linear scalarization. We hypothesize that replacing linear scalarization with a better algorithm can improve performance. We introduce Distillation of a Mixture of Experts (DiME), a new MORL algorithm that outperforms linear scalarization and can be applied to these non-standard MO problems. We demonstrate that for offline RL, DiME leads to a simple new algorithm that outperforms the state of the art. For finetuning, we derive new algorithms that learn to outperform the teacher policy.

Deep reinforcement learning (RL) algorithms have solved a number of challenging problems, including in games (Mnih et al., 2015; Silver et al., 2016), simulated continuous control (Heess et al., 2017; Peng et al., 2018), and robotics (OpenAI et al., 2018). The standard RL setting is appealing in its simplicity: an agent acts in the environment and can discover complex solutions simply by maximizing cumulative discounted reward. In practice, however, the situation is often more complicated. For instance, without a carefully crafted reward function or a sophisticated exploration strategy, learning may require hundreds of millions of environment interactions, or may not be possible at all. A number of strategies have been developed to mitigate the shortcomings of the pure RL paradigm, including approaches that regularize the final solution, for instance by maximizing auxiliary rewards (Jaderberg et al., 2017) or the entropy of the policy (Mnih et al., 2016; Haarnoja et al., 2018).
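
To make the linear-scalarization view concrete, the sketch below shows a per-state policy loss of the kind commonly used in offline RL and finetuning, where a task objective (the expected Q-value) and a regularization objective (a KL term toward a behavior or teacher policy) are collapsed into a single scalar with one fixed weight. The function name, the weight alpha, and the discrete-action, single-state setting are illustrative assumptions for this sketch; it is not the paper's DiME update.

    import numpy as np

    def scalarized_policy_loss(q_values, log_pi, log_teacher, alpha=0.1):
        """Illustrative linear scalarization of two conflicting objectives
        (assumed discrete-action, single-state example; not the DiME algorithm).

        q_values:    Q(s, a) for each action under the current critic
        log_pi:      log-probabilities of the policy being optimized
        log_teacher: log-probabilities of a behavior or teacher policy
        alpha:       fixed trade-off weight between the two objectives
        """
        pi = np.exp(log_pi)
        task_objective = np.sum(pi * q_values)             # E_pi[Q(s, a)]
        regularizer = np.sum(pi * (log_pi - log_teacher))  # KL(pi || teacher)
        # Linear scalarization: one fixed weight merges both objectives
        # into a single scalar loss to minimize.
        return -(task_objective - alpha * regularizer)

The only role of this sketch is to show where the single scalarization weight enters; the abstract's claim is that replacing this fixed weighted sum with a multi-objective update such as DiME can perform better when the two objectives conflict.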
