Multi-Principal Assistance Games: Definition and Collegial Mechanisms
Arnaud Fickinger, Simon Zhuang, Andrew Critch, Dylan Hadfield-Menell, Stuart Russell
arXiv.org Artificial Intelligence
We introduce the concept of a multi-principal assistance game (MPAG), and circumvent an obstacle in social choice theory -- Gibbard's theorem -- by using a sufficiently "collegial" preference inference mechanism. In an MPAG, a single agent assists N human principals who may have widely different preferences. MPAGs generalize assistance games, also known as cooperative inverse reinforcement learning games. We analyze in particular a generalization of apprenticeship learning in which the humans first perform some work to obtain utility and demonstrate their preferences, and then the robot acts to further maximize the sum of human payoffs. We show in this setting that if the game is sufficiently collegial -- i.e., if the humans are responsible for obtaining a sufficient fraction of the rewards through their own actions -- then their preferences are straightforwardly revealed through their work. This revelation mechanism is non-dictatorial, does not limit the possible outcomes to two alternatives, and is dominant-strategy incentive-compatible.
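The revelation mechanism described above can be illustrated with a toy sketch (this is a hypothetical simplification, not the paper's formal model): each principal prefers one of several task types, the "collegial" work phase has each human act on their own preferred task, and the robot then reads preferences off the demonstrations and allocates its assistance to maximize the sum of human payoffs.

```python
# Toy sketch of a collegial MPAG apprenticeship phase (hypothetical model,
# not the paper's formal definitions): N principals, K task types.

def infer_preferences(demonstrations):
    """Each demonstration is the task index a principal worked on.
    Because the mechanism is dominant-strategy incentive-compatible in the
    collegial regime, working on one's true favourite task is optimal, so
    inference reduces to reading the choice off directly."""
    return list(demonstrations)

def robot_policy(preferences, num_tasks):
    """The robot assists the task that maximizes the sum of human payoffs;
    with unit payoffs per matched principal, that is the most-demonstrated
    task (the mode of the revealed preferences)."""
    counts = [preferences.count(t) for t in range(num_tasks)]
    return max(range(num_tasks), key=lambda t: counts[t])

# Five principals demonstrate by working on their favourite of 3 tasks.
demos = [0, 2, 2, 1, 2]
prefs = infer_preferences(demos)
print(robot_policy(prefs, num_tasks=3))  # task 2 yields the highest total payoff
```

In this caricature the incentive-compatibility claim is trivial: since each human earns utility from their own work during the collegial phase, misrepresenting a preference would cost them directly, so truthful demonstration is a dominant strategy.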
Dec-28-2020