Q-learning for Quantile MDPs: A Decomposition, Performance, and Convergence Analysis
Jia Lin Hau, Erick Delage, Esther Derman, Mohammad Ghavamzadeh, Marek Petrik
arXiv.org Artificial Intelligence
In Markov decision processes (MDPs), quantile risk measures such as Value-at-Risk are standard metrics for modeling RL agents' preferences over risky outcomes. This paper proposes a new Q-learning algorithm for quantile optimization in MDPs with strong convergence and performance guarantees. The algorithm leverages a new, simple dynamic program (DP) decomposition for quantile MDPs. Compared with prior work, our DP decomposition requires neither known transition probabilities nor the solution of complex saddle-point equations, and it serves as a suitable foundation for other model-free RL algorithms. Our numerical results in tabular domains show that our Q-learning algorithm converges to its DP variant and outperforms earlier algorithms.
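The abstract does not spell out the update rule, so the sketch below is not the paper's algorithm; it is a minimal illustration of the general target-augmentation idea behind quantile objectives in MDPs: augment the state with a return target b, learn the maximal probability of meeting that target with tabular Q-learning, and read the quantile off as the largest achievable target. The toy one-step MDP, the target grid, and all names are hypothetical.

```python
import numpy as np

# Hedged sketch of quantile (VaR-style) Q-learning via target augmentation.
# NOT the paper's decomposition: the toy MDP and all parameters are made up.
# We learn Q[b, a] ~ probability that the return under action a meets target b;
# the tau-quantile of the optimal return is the largest b whose value >= 1 - tau.

rng = np.random.default_rng(0)

# Toy episodic MDP: one decision state, two actions, one step.
# Action 0 ("safe"): reward 1 surely. Action 1 ("risky"): reward 3 w.p. 0.3, else 0.
def step(action):
    if action == 0:
        return 1.0
    return 3.0 if rng.random() < 0.3 else 0.0

targets = np.linspace(0.0, 3.0, 31)   # discretized grid of return targets b
Q = np.zeros((len(targets), 2))       # Q[target_index, action]
alpha, eps, episodes = 0.05, 0.2, 20000

for _ in range(episodes):
    bi = rng.integers(len(targets))   # train every target level uniformly
    a = rng.integers(2) if rng.random() < eps else int(Q[bi].argmax())
    r = step(a)
    g = float(r >= targets[bi])       # terminal signal: did we meet the target?
    Q[bi, a] += alpha * (g - Q[bi, a])

tau = 0.5
v = Q.max(axis=1)                     # best achievable exceedance prob per target
feasible = targets[v >= 1.0 - tau]    # targets met with prob >= 1 - tau
var_est = feasible.max() if feasible.size else targets[0]
print(f"estimated {tau}-quantile of the optimal return: {var_est:.2f}")
```

Under these assumptions the safe action meets any target b <= 1 with probability 1, while the risky action meets targets up to 3 only with probability 0.3, so the printed 0.5-quantile estimate should be close to 1.0; in a multi-step MDP the terminal indicator would be replaced by a bootstrapped max over next-state values at the residual target.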
Oct-31-2024