Risk-Sensitive Control as Inference with Rényi Divergence
Neural Information Processing Systems
This paper introduces risk-sensitive control as inference (RCaI), which extends control as inference (CaI) by using Rényi divergence variational inference. RCaI is shown to be equivalent to log-probability-regularized risk-sensitive control, an extension of maximum entropy (MaxEnt) control. We also prove that the risk-sensitive optimal policy can be obtained by solving a soft Bellman equation, which reveals several equivalences between RCaI, MaxEnt control, the optimal posterior for CaI, and linearly solvable control. Moreover, based on RCaI, we derive risk-sensitive reinforcement learning (RL) methods: a policy gradient and a soft actor-critic. As the risk-sensitivity parameter vanishes, we recover the risk-neutral CaI and RL, which means that RCaI provides a unifying framework. Furthermore, we give another risk-sensitive generalization of MaxEnt control using Rényi entropy regularization. We show that in both of our extensions the optimal policies have the same structure, even though the derivations are very different.
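For context (standard definitions, not notation taken from the paper): the Rényi divergence of order \(\alpha\) between distributions \(P\) and \(Q\) with densities \(p\) and \(q\) is

\[
D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx, \qquad \alpha > 0,\ \alpha \neq 1,
\]

and it converges to the Kullback-Leibler divergence \(D_{\mathrm{KL}}(P \,\|\, Q)\) as \(\alpha \to 1\). This limit is consistent with the abstract's claim that the risk-neutral CaI and RL methods are recovered as the risk-sensitivity parameter vanishes. For comparison, the risk-neutral MaxEnt setting that RCaI generalizes is characterized by the standard soft Bellman backup (written here with unit temperature and discount \(\gamma\)):

\[
V(s) = \log \int \exp\bigl(Q(s,a)\bigr)\, da, \qquad Q(s,a) = r(s,a) + \gamma\, \mathbb{E}_{s' \sim p(\cdot \mid s,a)}\bigl[V(s')\bigr].
\]

The paper's risk-sensitive soft Bellman equation generalizes this backup; its exact form depends on the risk-sensitivity parameter and is given in the paper.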
- Genre:
  - Research Report > Experimental Study (1.00)
- Industry:
  - Information Technology (1.00)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning
      - Neural Networks (0.67)
      - Reinforcement Learning (0.67)
      - Statistical Learning (0.66)
    - Representation & Reasoning > Uncertainty (1.00)
  - Robots (1.00)