Solving Finite-Horizon MDPs via Low-Rank Tensors
Sergio Rozada, Jose Luis Orejuela, Antonio G. Marques
arXiv.org Artificial Intelligence
We study the problem of learning optimal policies in finite-horizon Markov Decision Processes (MDPs) using low-rank reinforcement learning (RL) methods. In finite-horizon MDPs, the policies, and therefore the value functions (VFs), are not stationary. This aggravates the challenges of high-dimensional MDPs, which suffer from the curse of dimensionality and high sample complexity. To address these issues, we propose modeling the VFs of finite-horizon MDPs as low-rank tensors, enabling a scalable representation that renders the problem of learning optimal policies tractable. We introduce an optimization-based framework for solving the Bellman equations with low-rank constraints, along with block-coordinate descent (BCD) and block-coordinate gradient descent (BCGD) algorithms, both with theoretical convergence guarantees. For scenarios where the system dynamics are unknown, we adapt the proposed BCGD method to estimate the VFs using sampled trajectories. Numerical experiments further demonstrate that the proposed framework reduces computational demands in controlled synthetic scenarios and more realistic resource allocation problems.
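To make the low-rank idea concrete, here is a minimal illustrative sketch (not the paper's exact Bellman-residual algorithm): a finite-horizon value function is stored as a 3-way tensor `V[h, s1, s2]` (horizon index plus two state dimensions) and fit with a rank-R CP model via block-coordinate descent, updating one factor matrix at a time in closed form while the others stay fixed. All names, sizes, and the synthetic target tensor are assumptions for the demo; the payoff is that the factors hold `(H + S1 + S2) * R` parameters instead of `H * S1 * S2` tensor entries.

```python
import numpy as np

rng = np.random.default_rng(0)
H, S1, S2, R = 5, 8, 8, 3  # horizon, two state dimensions, CP rank

# Synthetic low-rank "value function" tensor plus noise (stands in for
# the VF that the paper's framework would estimate from the MDP).
A0 = rng.normal(size=(H, R))
B0 = rng.normal(size=(S1, R))
C0 = rng.normal(size=(S2, R))
V = np.einsum('hr,ir,jr->hij', A0, B0, C0) + 0.01 * rng.normal(size=(H, S1, S2))


def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product: rows indexed by (i, j) pairs."""
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])


# Random initialization of the CP factors to be learned.
A = rng.normal(size=(H, R))
B = rng.normal(size=(S1, R))
C = rng.normal(size=(S2, R))

for _ in range(50):
    # Block-coordinate descent: each factor update is an exact
    # least-squares solve against the matching tensor unfolding.
    A = np.linalg.lstsq(khatri_rao(B, C), V.reshape(H, -1).T, rcond=None)[0].T
    B = np.linalg.lstsq(khatri_rao(A, C),
                        V.transpose(1, 0, 2).reshape(S1, -1).T, rcond=None)[0].T
    C = np.linalg.lstsq(khatri_rao(A, B),
                        V.transpose(2, 0, 1).reshape(S2, -1).T, rcond=None)[0].T

V_hat = np.einsum('hr,ir,jr->hij', A, B, C)
err = np.linalg.norm(V_hat - V) / np.linalg.norm(V)
print(f"relative fit error: {err:.3f}")
```

In the paper's setting the fitting target would come from Bellman equations or sampled trajectories rather than a known tensor, and BCGD would replace the exact solves with gradient steps on each factor, but the block structure of the updates is the same.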
Jan-17-2025