Why long model-based rollouts are no reason for bad Q-value estimates
Philipp Wissmann, Daniel Hein, Steffen Udluft, Volker Tresp
arXiv.org Artificial Intelligence
This paper explores model-based offline reinforcement learning with long model rollouts. Although parts of the literature criticize this approach because of compounding model errors, practitioners have found success with it in real-world applications. The paper demonstrates that long rollouts do not necessarily suffer exponentially growing errors and can in fact produce better Q-value estimates than model-free methods. These findings can help improve offline reinforcement-learning techniques.
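To make the core idea concrete, the sketch below shows how a Q-value can be estimated by unrolling a learned dynamics model for many steps and summing discounted rewards. This is a generic illustration of long model-based rollouts, not the paper's specific algorithm; the toy dynamics, reward, and policy functions are hypothetical stand-ins for learned components.

```python
def rollout_q_estimate(model, reward_fn, policy, s, a, horizon, gamma=0.99):
    """Estimate Q(s, a) by unrolling a (learned) dynamics model for `horizon`
    steps and accumulating the discounted rewards -- a plain Monte-Carlo
    return along the model rollout, with no bootstrapped value at the end."""
    q, discount = 0.0, 1.0
    for _ in range(horizon):
        s_next = model(s, a)            # one step of the learned model
        q += discount * reward_fn(s, a)
        discount *= gamma
        s, a = s_next, policy(s_next)   # continue under the evaluated policy
    return q

# Toy example (hypothetical): a 1-D point driven toward the origin.
model = lambda s, a: s + a              # stand-in for a learned transition model
reward_fn = lambda s, a: -abs(s)        # reward penalizes distance from origin
policy = lambda s: -0.5 * s             # proportional controller
q = rollout_q_estimate(model, reward_fn, policy, s=1.0, a=-0.5, horizon=200)
```

With a sufficiently long horizon, the discount factor bounds the return, so per-step model errors need not translate into unbounded error in the Q-estimate, which is the intuition behind the paper's claim.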
Jul-16-2024