Multi-timestep models for Model-based Reinforcement Learning
Benechehab, Abdelhakim, Paolo, Giuseppe, Thomas, Albert, Filippone, Maurizio, Kégl, Balázs
In model-based reinforcement learning (MBRL), most algorithms rely on simulating trajectories from one-step dynamics models learned on data. A critical challenge of this approach is the compounding of one-step prediction errors as the length of the trajectory grows. In this paper we tackle this issue by using a multi-timestep objective to train one-step models. Our objective is a weighted sum of a loss function (e.g., negative log-likelihood) at various future horizons. We explore and test a range of weight profiles. We find that exponentially decaying weights lead to models that significantly improve the long-horizon R2 score. This improvement is particularly noticeable when the models are evaluated on noisy data. Finally, using a soft actor-critic (SAC) agent in pure batch reinforcement learning (RL) and iterated batch RL scenarios, we find that our multi-timestep models outperform or match standard one-step models. This is especially evident in a noisy variant of the considered environment, highlighting the potential of our approach in real-world applications.
Oct-11-2023
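To make the objective concrete, below is a minimal sketch of a multi-timestep training loss: the one-step model is rolled out on its own predictions, and the per-horizon negative log-likelihoods are combined with exponentially decaying weights. The function names, the assumption that `model(s, a)` returns a `torch.distributions.Normal`, and the choice to propagate a sample of the predicted state are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def multi_timestep_loss(model, states, actions, horizon=4, beta=0.5):
    """Weighted sum of per-horizon NLL with exponentially decaying weights.

    states:  (batch, horizon + 1, state_dim) ground-truth states s_t ... s_{t+h}
    actions: (batch, horizon, action_dim)    actions a_t ... a_{t+h-1}
    model:   assumed (hypothetically) to map (state, action) to a
             torch.distributions.Normal over the next state.
    """
    weights = torch.tensor([beta ** k for k in range(horizon)])
    weights = weights / weights.sum()          # normalize so the weights sum to 1

    s = states[:, 0]                           # start from the observed state s_t
    total = 0.0
    for k in range(horizon):
        dist = model(s, actions[:, k])         # predictive distribution over s_{t+k+1}
        nll = -dist.log_prob(states[:, k + 1]).sum(dim=-1).mean()
        total = total + weights[k] * nll       # decayed contribution of horizon k+1
        s = dist.rsample()                     # feed the model's own prediction back in
    return total
```

With `beta = 1` this reduces to an unweighted multi-step loss, while smaller values of `beta` emphasize short-horizon accuracy; the exponentially decaying profile corresponds to the weighting the abstract reports as most effective.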