Bounded Risk-Sensitive Markov Game and Its Inverse Reward Learning Problem
Ran Tian, Liting Sun, Masayoshi Tomizuka
Classical game-theoretic approaches to multi-agent systems, in both the forward policy design problem and the inverse reward learning problem, often make strong rationality assumptions: agents perfectly maximize expected utilities under uncertainties. Such assumptions, however, substantially mismatch observed human behaviors such as satisficing with sub-optimal decisions, risk-seeking, and loss aversion. In this paper, we investigate the bounded risk-sensitive Markov Game (BRSMG) and its inverse reward learning problem. Drawing on iterative reasoning models and cumulative prospect theory, we model humans in BRSMGs as having bounded intelligence and maximizing risk-sensitive utilities. Convergence analyses for both the forward policy design and the inverse reward learning problems are established under the BRSMG framework. We also validate the proposed forward policy design and inverse reward learning algorithms in a navigation scenario. The results show that the agents' behaviors exhibit both risk-averse and risk-seeking characteristics. Moreover, in the inverse reward learning task, the proposed bounded risk-sensitive inverse learning algorithm outperforms a baseline risk-neutral inverse learning algorithm: given demonstrations of the agents' interactive behaviors, it recovers not only more accurate reward values but also the agents' intelligence levels and risk-measure parameters.
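To make the cumulative-prospect-theory ingredient concrete, the sketch below shows the standard Tversky-Kahneman value and probability-weighting functions that produce the risk-averse/risk-seeking behaviors the abstract describes. This is a generic CPT illustration, not the paper's specific formulation; the parameter names and default values (`alpha`, `beta`, `lam`, `gamma`) are assumed from the CPT literature.

```python
def cpt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """CPT value function: concave over gains, convex and steeper over losses.

    Produces risk-averse behavior for gains and risk-seeking,
    loss-averse behavior for losses (lam > 1 amplifies losses).
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)


def cpt_weight(p, gamma=0.61):
    """CPT probability weighting: overweights small probabilities
    and underweights large ones, distorting expected utility."""
    return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1.0 / gamma))
```

A risk-sensitive utility of a discrete outcome distribution would then combine `cpt_value` of each outcome with `cpt_weight`-distorted probabilities in place of a plain expectation.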
Nov-8-2020
- Country:
- North America > United States (0.14)
- Genre:
- Research Report > New Finding (0.87)
- Industry:
- Education > Focused Education
- Special Education (1.00)
- Leisure & Entertainment > Games (0.93)