Thorburn, Luke
SHAPE: A Framework for Evaluating the Ethicality of Influence
Bezou-Vrakatseli, Elfia, Brückner, Benedikt, Thorburn, Luke
Agents often exert influence when interacting with humans and non-human agents. However, the ethical status of such influence is often unclear. In this paper, we present the SHAPE framework, which lists reasons why influence may be unethical. We draw on literature from descriptive and moral philosophy and connect it to machine learning to help guide ethical considerations when developing algorithms with potential influence. Lastly, we explore mechanisms for governing algorithmic systems that influence people, inspired by mechanisms used in journalism, human subject research, and advertising.
Error in the Euclidean Preference Model
Thorburn, Luke, Polukarov, Maria, Ventre, Carmine
Spatial models of preference, in the form of vector embeddings, are learned by many deep learning and multiagent systems, including recommender systems. Often these models are assumed to approximate a Euclidean structure, where an individual prefers alternatives positioned closer to their "ideal point", as measured by the Euclidean metric. However, Bogomolnaia and Laslier (2007) showed that there exist ordinal preference profiles that cannot be represented with this structure if the Euclidean space has two fewer dimensions than there are individuals or alternatives. We extend this result, showing that there are situations in which almost all preference profiles cannot be represented with the Euclidean model, and derive a theoretical lower bound on the expected error when using the Euclidean model to approximate non-Euclidean preference profiles. Our results have implications for the interpretation and use of vector embeddings, because in some cases close approximation of arbitrary, true ordinal relationships can be expected only if the dimensionality of the embeddings is a substantial fraction of the number of entities represented.
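The Euclidean preference model described above can be sketched in a few lines. This is an illustrative example only (not code from the paper), assuming a hypothetical `euclidean_ranking` helper: each individual has an "ideal point", and alternatives are ranked by their Euclidean distance from it, closer meaning more preferred.

```python
import math

def euclidean_ranking(ideal_point, alternatives):
    """Rank alternatives from most to least preferred under the
    Euclidean preference model: smaller distance to the individual's
    ideal point means stronger preference."""
    distances = [math.dist(ideal_point, a) for a in alternatives]
    return sorted(range(len(alternatives)), key=lambda i: distances[i])

# An individual's ideal point and three alternative embeddings in 2D.
ideal = (0.0, 0.0)
alts = [(3.0, 4.0),   # distance 5.0
        (1.0, 0.0),   # distance 1.0
        (0.0, 2.0)]   # distance 2.0

print(euclidean_ranking(ideal, alts))  # -> [1, 2, 0]
```

The paper's negative results concern the converse direction: given an arbitrary ordinal preference profile, such ideal points and embeddings may not exist at all unless the embedding dimension is a substantial fraction of the number of individuals or alternatives.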
Online Handbook of Argumentation for AI: Volume 2
OHAAI Collaboration, Brannstrom, Andreas, Castagna, Federico, Duchatelle, Theo, Foulis, Matt, Kampik, Timotheus, Kuhlmann, Isabelle, Malmqvist, Lars, Morveli-Espinoza, Mariela, Mumford, Jack, Pandzic, Stipe, Schaefer, Robin, Thorburn, Luke, Xydis, Andreas, Yuste-Ginel, Antonio, Zheng, Heng
This volume contains revised versions of the papers selected for the second volume of the Online Handbook of Argumentation for AI (OHAAI). Previously, formal theories of argument and argument interaction were proposed and studied, and this has led to the more recent study of computational models of argument. Argumentation, as a field within artificial intelligence (AI), is highly relevant for researchers interested in symbolic representations of knowledge and defeasible reasoning. The purpose of this handbook is to provide an open-access, curated anthology for the argumentation research community. OHAAI is designed to serve as a research hub that keeps track of the latest and upcoming PhD-driven research on the theory and application of argumentation in all areas related to AI.