Isaac, William S.
A theory of appropriateness with applications to generative artificial intelligence
Leibo, Joel Z., Vezhnevets, Alexander Sasha, Diaz, Manfred, Agapiou, John P., Cunningham, William A., Sunehag, Peter, Haas, Julia, Koster, Raphael, Duéñez-Guzmán, Edgar A., Isaac, William S., Piliouras, Georgios, Bileschi, Stanley M., Rahwan, Iyad, Osindero, Simon
What is appropriateness? Humans navigate a multi-scale mosaic of interlocking notions of what is appropriate for different situations. We act one way with our friends, another with our family, and yet another in the office. Likewise for AI, appropriate behavior for a comedy-writing assistant is not the same as appropriate behavior for a customer-service representative. What determines which actions are appropriate in which contexts? And what causes these standards to change over time? Since all judgments of AI appropriateness are ultimately made by humans, we need to understand how appropriateness guides human decision making in order to properly evaluate AI decision making and improve it. This paper presents a theory of appropriateness: how it functions in human society, how it may be implemented in the brain, and what it means for responsible deployment of generative AI technology.
Modelling Cooperation in Network Games with Spatio-Temporal Complexity
Bakker, Michiel A., Everett, Richard, Weidinger, Laura, Gabriel, Iason, Isaac, William S., Leibo, Joel Z., Hughes, Edward
The real world is awash with multi-agent problems that require collective action by self-interested agents, from the routing of packets across a computer network to the management of irrigation systems. Such systems have local incentives for individuals, whose behavior affects the global outcome for the group. Given appropriate mechanisms describing agent interaction, groups may achieve socially beneficial outcomes, even in the face of short-term selfish incentives. In many cases, collective action problems possess an underlying graph structure whose topology crucially determines the relationship between local decisions and emergent global effects. Such scenarios have received great attention through the lens of network games. However, this abstraction typically collapses important dimensions, such as geometry and time, that are relevant to the design of mechanisms promoting cooperation. In parallel, multi-agent deep reinforcement learning has shown great promise in modelling the emergence of self-organized cooperation in complex gridworld domains. Here we apply this paradigm to graph-structured collective action problems. Using multi-agent deep reinforcement learning, we simulate an agent society under a variety of plausible mechanisms, finding clear transitions between different equilibria over time. We define analytic tools inspired by related literatures to measure the social outcomes, and use these to draw conclusions about the efficacy of different environmental interventions. Our methods have implications for mechanism design in both human and artificial agent systems.
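The abstract's "analytic tools... to measure the social outcomes" are not spelled out here, so the following is a hedged sketch only: two metrics commonly used when analyzing multi-agent simulations, collective return and a Gini-based equality index, computed over per-agent episode returns. The function names and the choice of metrics are illustrative assumptions, not the paper's definitions.

# Illustrative sketch in Python (assumed metrics, not the paper's own):
# social-outcome measures over per-agent episode returns in a network game.
from typing import Sequence

def collective_return(returns: Sequence[float]) -> float:
    """Total welfare: the sum of all agents' episode returns."""
    return sum(returns)

def equality(returns: Sequence[float]) -> float:
    """1 minus the Gini coefficient of returns; 1.0 means perfect equality."""
    n = len(returns)
    total = sum(returns)
    if n == 0 or total == 0:
        return 1.0
    # Mean absolute difference over all ordered pairs of agent returns.
    mad = sum(abs(a - b) for a in returns for b in returns) / (n * n)
    return 1.0 - mad / (2 * total / n)

# Hypothetical per-agent returns from one simulated episode.
episode_returns = [4.0, 3.5, 0.5, 2.0]
print(collective_return(episode_returns))   # 10.0
print(round(equality(episode_returns), 3))  # 0.7

Tracking metrics like these over the course of training is one way the transitions between equilibria mentioned above become visible, as shifts in total welfare and in how evenly it is distributed.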
Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics
Martin, Donald Jr., Prabhakaran, Vinodkumar, Kuhlberg, Jill, Smart, Andrew, Isaac, William S.
Recent research on algorithmic fairness has highlighted that the problem formulation phase of ML system development can be a key source of bias that has significant downstream impacts on ML system fairness outcomes. However, very little attention has been paid to methods for improving the fairness efficacy of this critical phase of ML system development. Current practice neither accounts for the dynamic complexity of high-stakes domains nor incorporates the perspectives of vulnerable stakeholders. In this paper we introduce community based system dynamics (CBSD) as an approach to enable the participation of typically excluded stakeholders in the problem formulation phase of the ML system development process and facilitate the deep problem understanding required to mitigate bias during this crucial stage. Problem formulation is a crucial first step in any machine learning (ML) based intervention that has the potential to impact people's real lives; it involves determining the strategic goals driving the intervention and translating those goals into tractable machine learning problems (Barocas et al., 2017; Passi & Barocas, 2019).
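As a concrete picture of what system dynamics modelling involves (the model below is a hypothetical toy, not one from the paper), the sketch simulates a single stock-and-flow structure with a reinforcing feedback loop of the kind a CBSD group might diagram during problem formulation: resource allocation responds to recorded incident counts, which the allocation itself inflates.

# Toy stock-and-flow simulation in Python (hypothetical model, for
# illustration only): a reinforcing feedback loop made explicit.
def simulate(steps=20, dt=1.0):
    recorded_incidents = 10.0  # stock: cumulative recorded incidents
    patrol_intensity = 1.0     # auxiliary: patrols allocated per step
    history = []
    for _ in range(steps):
        # Flow: more patrols in an area -> more incidents recorded there.
        recorded_incidents += 2.0 * patrol_intensity * dt
        # Feedback: allocation tracks recorded (not true) incident counts.
        patrol_intensity = recorded_incidents / 10.0
        history.append(recorded_incidents)
    return history

print(simulate()[:5])  # the stock grows even if underlying conditions don't

Making such loops explicit is what lets stakeholders contest whether a proposed ML target variable (here, recorded incidents) actually measures the construct the intervention cares about.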
A Causal Bayesian Networks Viewpoint on Fairness
Chiappa, Silvia, Isaac, William S.
We offer a graphical interpretation of unfairness in a dataset as the presence of an unfair causal path in the causal Bayesian network representing the data-generation mechanism. We use this viewpoint to revisit the recent debate surrounding the COMPAS pretrial risk assessment tool and, more generally, to point out that evaluating the fairness of a model requires careful consideration of the patterns of unfairness underlying the training data. We show that causal Bayesian networks provide us with a powerful tool to measure unfairness in a dataset and to design fair models in complex unfairness scenarios.
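As a minimal sketch of the core idea (the graph, variable names, and fairness labels below are invented for illustration and are not taken from the paper or the COMPAS analysis), the code enumerates directed paths from a sensitive attribute to a model's prediction in a small causal DAG and flags any path that traverses an edge judged unfair.

# Minimal Python sketch (hypothetical graph): flag unfair causal paths
# from a sensitive attribute A to a prediction Yhat in a causal network.
DAG = {                    # node -> children
    "A": ["Z", "M"],       # A: sensitive attribute
    "Z": ["Yhat"],         # Z: proxy variable (e.g., neighborhood)
    "M": ["Yhat"],         # M: legitimate mediator (e.g., qualifications)
    "Yhat": [],            # Yhat: model prediction
}
UNFAIR_EDGES = {("A", "Z")}  # a domain judgment, assumed for this sketch

def directed_paths(graph, src, dst, prefix=()):
    path = prefix + (src,)
    if src == dst:
        yield path
        return
    for child in graph.get(src, ()):
        yield from directed_paths(graph, child, dst, path)

for path in directed_paths(DAG, "A", "Yhat"):
    edges = set(zip(path, path[1:]))
    verdict = "unfair" if edges & UNFAIR_EDGES else "fair"
    print(" -> ".join(path), "|", verdict)

Under this viewpoint, a model is fair to the extent that its prediction depends on A only through paths judged fair; in this toy graph, only the path through the mediator M would qualify.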