On Bellman equations for continuous-time policy evaluation I: discretization and approximation
arXiv.org Artificial Intelligence
We study the problem of computing the value function from a discretely-observed trajectory of a continuous-time diffusion process. We develop a new class of algorithms based on easily implementable numerical schemes that are compatible with discrete-time reinforcement learning (RL) with function approximation. We establish high-order numerical accuracy as well as approximation error guarantees for the proposed approach. In contrast to discrete-time RL problems, where the approximation factor depends on the effective horizon, we obtain a bounded approximation factor by exploiting the underlying elliptic structure, even as the effective horizon diverges to infinity.
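To make the setting concrete, the following is a minimal sketch (not the paper's proposed scheme) of the standard discrete-time approach the abstract contrasts with: estimate the discounted value function of an Ornstein–Uhlenbeck diffusion from a discretely-observed trajectory, using least-squares temporal-difference (LSTD) policy evaluation with polynomial features. The dynamics, reward, and feature map here are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: evaluate V(x) = E[∫ e^{-βt} r(X_t) dt | X_0 = x]
# for the OU diffusion dX_t = -θ X_t dt + σ dW_t, observed on a grid of
# step size h, via the discrete-time Bellman equation
#   V(x) ≈ h·r(x) + e^{-βh}·E[V(X_{t+h}) | X_t = x].
rng = np.random.default_rng(0)
theta, sigma, beta, h = 1.0, 0.5, 1.0, 0.01
n_steps = 200_000

# Simulate a discretely-observed trajectory (Euler–Maruyama scheme).
x = np.empty(n_steps + 1)
x[0] = 0.0
for t in range(n_steps):
    x[t + 1] = x[t] - theta * x[t] * h + sigma * np.sqrt(h) * rng.standard_normal()

def phi(s):
    # Polynomial features: a stand-in for generic function approximation.
    return np.stack([np.ones_like(s), s, s ** 2], axis=-1)

r = x[:-1] ** 2               # example reward r(x) = x^2
gamma = np.exp(-beta * h)     # per-step discount induced by the rate β

# LSTD(0): solve A w = b, where A = Φᵀ(Φ - γΦ') and b = h·Φᵀ r,
# so V(x) ≈ φ(x)ᵀ w is the fixed point of the projected Bellman operator.
P, Pn = phi(x[:-1]), phi(x[1:])
A = P.T @ (P - gamma * Pn)
b = h * (P.T @ r)
w = np.linalg.solve(A, b)
```

The approximation factor of such discrete-time estimates typically degrades as the effective horizon 1/β grows (the discount γ approaches 1); the paper's contribution is a scheme whose approximation factor stays bounded in that regime by using the elliptic structure of the diffusion.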
Jul-8-2024