Regret Bounds for Markov Decision Processes with Recursive Optimized Certainty Equivalents

Wenhao Xu, Xuefeng Gao, Xuedong He

arXiv.org Artificial Intelligence 

Reinforcement learning (RL) studies sequential decision making in an unknown environment by carefully balancing exploration and exploitation (Sutton and Barto 2018). In the classical setting, it describes how an agent takes actions to maximize expected cumulative rewards in an environment typically modeled by a Markov decision process (MDP, Puterman (2014)). However, optimizing expected cumulative rewards alone is often insufficient in practical applications such as finance, healthcare, and robotics, so it may be necessary to take the agent's risk preferences into account in the dynamic decision process. Indeed, a rich body of literature has studied risk-sensitive (and safe) RL, incorporating risk measures such as the entropic risk measure and conditional value-at-risk (CVaR) into the decision criterion; see, e.g., Shen et al. (2014), García and Fernández (2015), Tamar et al. (2016), Chow et al. (2017), Prashanth L and Fu (2018), Fei et al. (2020), and the references therein.

In this paper we study risk-sensitive RL for tabular MDPs with unknown transition probabilities in the finite-horizon, episodic setting, where an agent interacts with an MDP with finite state and action spaces over episodes of fixed length. To incorporate risk sensitivity, we consider a broad and important class of risk measures known as the Optimized Certainty Equivalent (OCE; Ben-Tal and Teboulle 1986, 2007). The OCE is a (nonlinear) risk function that assigns a real value to a random variable X and depends on a concave utility function; see Equation (1) for the definition.
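Equation (1) referenced above is not reproduced in this excerpt; as a sketch, the standard OCE of Ben-Tal and Teboulle (1986, 2007), for a random payoff X and a concave, nondecreasing utility function u with u(0) = 0 and 1 in the superdifferential of u at 0, takes the form

\[
\mathrm{OCE}_u(X) \;=\; \sup_{\lambda \in \mathbb{R}} \Big\{ \lambda + \mathbb{E}\big[\, u(X - \lambda) \,\big] \Big\}.
\]

Taking u(t) = t recovers the plain expectation \(\mathbb{E}[X]\), while u(t) = (1/\gamma)(1 - e^{-\gamma t}) with \(\gamma > 0\) yields the entropic risk measure \(-(1/\gamma)\log \mathbb{E}[e^{-\gamma X}]\); a suitable piecewise-linear choice of u recovers CVaR, up to sign and level conventions.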
