Doing the right thing for the right reason: Evaluating artificial moral cognition by probing cost insensitivity
Mao, Yiran, Reinecke, Madeline G., Kunesch, Markus, Duéñez-Guzmán, Edgar A., Comanescu, Ramona, Haas, Julia, Leibo, Joel Z.
arXiv.org Artificial Intelligence
Is it possible to evaluate the moral cognition of complex artificial agents? In this work, we examine one aspect of morality: "doing the right thing for the right reasons." We propose a behavior-based analysis of artificial moral cognition that could also be applied to humans, facilitating like-for-like comparison. Morally motivated behavior should persist despite mounting cost; by measuring an agent's sensitivity to this cost, we gain deeper insight into its underlying motivations. We apply this evaluation to a set of deep reinforcement learning agents trained via memory-based meta-reinforcement learning. Our results indicate that agents trained with a reward function that includes other-regarding preferences perform helping behavior in a way that is less sensitive to increasing cost than agents trained with more self-interested preferences.
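The cost-sensitivity probe described above can be sketched in a few lines: measure an agent's helping rate at increasing cost levels and use the fitted slope as a simple sensitivity index (a steeper negative slope means helping collapses faster as cost rises). This is a minimal illustrative sketch; the data values and the `cost_sensitivity` helper are hypothetical, not taken from the paper.

```python
import numpy as np

def cost_sensitivity(costs, helping_rates):
    """Least-squares slope of helping rate vs. cost.

    A more negative slope indicates helping behavior that is more
    sensitive to mounting cost.
    """
    slope, _intercept = np.polyfit(costs, helping_rates, deg=1)
    return slope

# Hypothetical cost levels and helping rates for two agent types.
costs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
# An other-regarding agent keeps helping as cost rises...
other_regarding = np.array([0.95, 0.93, 0.90, 0.88, 0.85])
# ...while a self-interested agent's helping collapses with cost.
self_interested = np.array([0.95, 0.70, 0.45, 0.20, 0.05])

s_other = cost_sensitivity(costs, other_regarding)
s_self = cost_sensitivity(costs, self_interested)
# Expect the other-regarding agent's slope to be much shallower
# (closer to zero) than the self-interested agent's.
```

Under this framing, "doing the right thing for the right reasons" corresponds to a slope near zero: the behavior persists even as the cost of performing it grows.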
May-29-2023