Actor-Critic with variable time discretization via sustained actions
Jakub Łyskawa, Paweł Wawrzyński
arXiv.org Artificial Intelligence
Reinforcement learning (RL) methods work in discrete time. To apply RL to inherently continuous problems such as robotic control, a specific time discretization must be defined. This is a trade-off between coarse time control, which may be easier to train, and finer time control, which may allow for better ultimate performance. In this work, we propose SusACER, an off-policy RL algorithm that combines the advantages of different time discretization settings. It initially operates with a coarse time discretization and gradually switches to a fine one. We analyze the effects of the changing time discretization in robotic control environments: Ant, HalfCheetah, Hopper, and Walker2D. In all cases, our proposed algorithm outperforms the state of the art.
Aug-8-2023