The Distributional Reward Critic Architecture for Perturbed-Reward Reinforcement Learning
Xi Chen, Zhihui Zhu, Andrew Perrault
We study reinforcement learning in the presence of an unknown reward perturbation. Existing methodologies for this problem make strong assumptions, including reward smoothness, known perturbations, and/or perturbations that do not modify the optimal policy. We study the case of unknown arbitrary perturbations that discretize and shuffle reward space, but have the property that the true reward belongs to the most frequently observed class after perturbation. This class of perturbations generalizes existing classes (and, in the limit, all continuous bounded perturbations) and defeats existing methods. We introduce an adaptive distributional reward critic and show theoretically that it can recover the true rewards under technical conditions. Under the targeted perturbation in discrete and continuous control tasks, we achieve or tie the highest return in 40/57 settings (compared to 16/57 for the best baseline). Even under the untargeted perturbation, we still retain an edge over the baseline designed especially for that setting.

The use of reward as an objective is a central feature of reinforcement learning (RL) that has been hypothesized to constitute a path to general intelligence (Silver et al., 2021). The reward is also the cause of a substantial amount of human effort associated with RL, from engineering to reduce difficulties caused by sparse, delayed, or misspecified rewards (Ng et al., 1999; Hadfield-Menell et al., 2017; Qian et al., 2023) to gathering large volumes of human-labeled rewards used for tuning large language models (LLMs) (Ouyang et al., 2022; Bai et al., 2022).
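The key property of the perturbation class can be illustrated with a minimal NumPy sketch (not the authors' implementation): rewards are discretized into classes and passed through a confusion matrix whose largest entry in each row lies on the diagonal, so the true class remains the most frequently observed one and the mode of the observed classes recovers the true discretized reward. The class count K, the bin values, and the 0.6/0.4 mixing weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5                                   # number of discretized reward classes (assumed)
bin_values = np.linspace(-1.0, 1.0, K)  # representative reward value per class (assumed)

# Diagonally dominant confusion matrix: off-diagonal mass is shuffled
# arbitrarily, but each row keeps its largest probability on the diagonal,
# so the true class is the most frequent observation after perturbation.
C = rng.uniform(0.0, 1.0, size=(K, K))
np.fill_diagonal(C, 0.0)
C = 0.6 * np.eye(K) + 0.4 * C / C.sum(axis=1, keepdims=True)

true_class = 3                          # true (discretized) reward class
observed = rng.choice(K, size=2000, p=C[true_class])

# Empirical distribution of observed (perturbed) classes; its argmax is the mode.
empirical = np.bincount(observed, minlength=K) / len(observed)
recovered = int(np.argmax(empirical))

print("observed distribution:", np.round(empirical, 3))
print("recovered reward:", bin_values[recovered], "| true reward:", bin_values[true_class])
```

A distributional reward critic exploits the same mode-recovery principle with function approximation: a categorical head trained on observed rewards concentrates, per state-action pair, on the most frequent class, whose bin value can then be used in place of the perturbed reward for policy optimization.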
arXiv.org Artificial Intelligence
Jan-11-2024