Taming Equilibrium Bias in Risk-Sensitive Multi-Agent Reinforcement Learning

Fei, Yingjie, Xu, Ruitu

arXiv.org Artificial Intelligence 

Recent advances in reinforcement learning research have spurred much development in multi-agent reinforcement learning (MARL). However, most work focuses on risk-neutral agents, which may not be suitable for modeling the real world. For example, in investment activities, different investors have different risk preferences depending on their roles in the market: some act as speculators and are risk-seeking, while others are bound by regulatory constraints and are thus risk-averse. Another example is multi-player online role-playing games, where each player can be considered an agent. Whereas some (risk-seeking) players enjoy exploring uncharted regions of the game, others (risk-averse players) prefer to play in areas that are well explored and carry less uncertainty. In the above examples, modeling every agent as uniformly risk-neutral is clearly inappropriate. This naturally calls for a more sophisticated modeling framework that takes into account the heterogeneous risk preferences of agents. In this paper, we study the problem of risk-sensitive MARL in the setting of general-sum Markov games (MGs), a more realistic multi-agent model in which agents may have different risk preferences.
