Learning on One Mode: Addressing Multi-Modality in Offline Reinforcement Learning

Mianchu Wang, Yue Jin, Giovanni Montana

arXiv.org Artificial Intelligence 

Offline reinforcement learning (RL) enables policy learning from static datasets, without active environment interaction, making it ideal for high-stakes applications like autonomous driving and robot manipulation [Levine et al., 2020, Ma et al., 2022, Wang et al., 2024a]. A key challenge in offline RL is managing the discrepancy between the learned policy and the behaviour policy that generated the dataset. Keeping the discrepancy small can limit policy improvement, while allowing it to grow pushes the learned policy into uncharted regions of the state-action space, causing significant extrapolation errors and poor generalisation [Fujimoto et al., 2019, Yang et al., 2023]. To address this challenge, existing research has pursued two main directions: conservative approaches penalise actions that stray into out-of-distribution (OOD) regions [Yu et al., 2020, Kumar et al., 2020], while policy-constraint methods regularise the policy by minimising its divergence from the behaviour policy, ensuring better fidelity to the dataset [Fujimoto and Gu, 2021, Wu et al., 2019].
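As a concrete illustration of the policy-constraint family mentioned above (not the method proposed in this paper), the sketch below shows a TD3+BC-style actor objective in PyTorch: the actor is trained to maximise the critic's Q-value while a mean-squared behaviour-cloning term keeps its actions close to those in the dataset. The `Actor` network, the critic interface, and the `alpha` trade-off coefficient are illustrative assumptions.

```python
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Deterministic policy network mapping states to actions in [-1, 1]."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def regularised_actor_loss(actor, critic, states, dataset_actions, alpha=2.5):
    """TD3+BC-style objective: maximise Q(s, pi(s)) while penalising
    divergence from the dataset actions via a behaviour-cloning term."""
    policy_actions = actor(states)
    q_values = critic(states, policy_actions)
    # Scale the Q term so the BC penalty has a comparable magnitude.
    lam = alpha / q_values.abs().mean().detach()
    bc_penalty = ((policy_actions - dataset_actions) ** 2).mean()
    return -lam * q_values.mean() + bc_penalty
```

Minimising this loss over the offline dataset keeps the learned policy near the behaviour distribution, which mitigates extrapolation error but, as the paper argues, can be restrictive when the dataset is generated by multiple behaviour modes.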