Exponential Bellman Equation and Improved Regret Bounds for Risk-Sensitive Reinforcement Learning

Neural Information Processing Systems 

We study risk-sensitive reinforcement learning (RL) based on the entropic risk measure. Although existing works have established non-asymptotic regret guarantees for this problem, they leave open an exponential gap between the upper and lower bounds. We identify the deficiencies in existing algorithms and their analysis that result in such a gap. To remedy these deficiencies, we investigate a simple transformation of the risk-sensitive Bellman equations, which we call the exponential Bellman equation.
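As a concrete illustration of the entropic risk measure underlying this setting, the following sketch evaluates (1/β) log E[exp(βX)] for a discrete random variable. The helper name `entropic_risk` and the example distribution are illustrative assumptions, not part of the paper:

```python
import math

def entropic_risk(values, probs, beta):
    """Entropic risk measure of a discrete random variable X:
    (1/beta) * log E[exp(beta * X)].

    beta > 0 is risk-seeking, beta < 0 is risk-averse;
    as beta -> 0 the measure recovers the plain expectation E[X].
    (Illustrative helper; not code from the paper.)
    """
    mgf = sum(p * math.exp(beta * v) for v, p in zip(values, probs))
    return math.log(mgf) / beta

# A fair coin paying 0 or 1: by Jensen's inequality, the entropic risk
# sits above the mean (0.5) for beta > 0 and below it for beta < 0.
risk_seeking = entropic_risk([0.0, 1.0], [0.5, 0.5], beta=1.0)
risk_averse = entropic_risk([0.0, 1.0], [0.5, 0.5], beta=-1.0)
```

The exponential of this quantity, E[exp(βX)], is what the exponential Bellman equation propagates directly, avoiding the nonlinear logarithm inside the recursion.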