Responsibility in a Multi-Value Strategic Setting

Parker, Timothy, Grandi, Umberto, Lorini, Emiliano

arXiv.org Artificial Intelligence 

Responsibility is a key notion in multi-agent systems and in the creation of safe, reliable and ethical AI. In particular, evaluating choices in terms of responsibility is useful for making robustly good decisions in unpredictable domains. However, most previous work on responsibility has considered responsibility only for single outcomes, limiting its applicability. In this paper we present a model for responsibility attribution in a multi-agent, multi-value setting. We also extend our model to cover responsibility anticipation, demonstrating how considerations of responsibility can help an agent select strategies that are in line with its values. In particular, we show that non-dominated regret-minimising strategies reliably minimise an agent's expected degree of responsibility.