Veale, Michael
Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries
Gorwa, Robert, Veale, Michael
The AI development community increasingly relies on hosting intermediaries such as Hugging Face, which provide easy access to user-uploaded models and training data. These model marketplaces lower technical deployment barriers for hundreds of thousands of users, yet can be used in numerous potentially harmful and illegal ways. In this article, we explain how AI systems, which can both 'contain' content and act as open-ended tools, present one of the trickiest platform governance challenges seen to date. We provide case studies of several incidents across three illustrative platforms (Hugging Face, GitHub, and Civitai) to examine how model marketplaces moderate models. Building on this analysis, we outline important (yet still limited) practices that industry has been developing to respond to moderation demands: licensing, access and use restrictions, automated content moderation, and open policy development. While the policy challenge at hand is considerable, we conclude with some ideas as to how platforms could better mobilize resources to act as careful, fair, and proportionate regulatory access points.
Understanding accountability in algorithmic supply chains
Cobbe, Jennifer, Veale, Michael, Singh, Jatinder
Academic and policy proposals on algorithmic accountability often seek to understand algorithmic systems in their socio-technical context, recognising that they are produced by 'many hands'. Increasingly, however, algorithmic systems are also produced, deployed, and used within a supply chain comprising multiple actors tied together by flows of data between them. In such cases, it is the working together of an algorithmic supply chain, in which different actors contribute to a system's production, deployment, use, and functionality, that drives systems and produces particular outcomes. We argue that algorithmic accountability discussions must consider supply chains and the difficult implications they raise for the governance and accountability of algorithmic systems. In doing so, we explore algorithmic supply chains, locating them in their broader technical and political economic context and identifying some key features that should be understood in future work on algorithmic governance and accountability (particularly regarding general-purpose AI services). To highlight ways forward and areas warranting attention, we further discuss some implications raised by supply chains: challenges for allocating accountability stemming from distributed responsibility for systems between actors, limited visibility due to the accountability horizon, service models of use and liability, and cross-border supply chains and regulatory arbitrage.
Demystifying the Draft EU Artificial Intelligence Act
Veale, Michael, Borgesius, Frederik Zuiderveen
Thanks to Valerio De Stefano, Reuben Binns, Jeremias Adams-Prassl, Barend van Leeuwen, Aislinn Kelly-Lyth, Lilian Edwards, Natali Helberger, Christopher Marsden, Sarah Chander, and Corinne Cath-Speth for comments and/or discussion; substantive and editorial input by Ulrich Gasper; and the conveners and participants of several workshops, including one convened by Margot Kaminski; one by Burkhard Schäfer; one as part of the 2nd ELLIS Workshop in Human-Centric Machine Learning; one between Lund University and the Labour Law Community; and one between Oxford, KU Leuven, and UCL. A CC-BY 4.0 license applies to this article after 3 calendar months from publication have elapsed.
Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"?
Edwards, Lilian, Veale, Michael
As concerns about unfairness and discrimination in "black box" machine learning systems rise, a legal "right to an explanation" has emerged as a compellingly attractive approach for challenge and redress. We outline recent debates on the limited provisions in European data protection law, and introduce and analyze newer explanation rights in French administrative law and the draft modernized Council of Europe Convention 108. While individual rights can be useful, in privacy law they have historically placed an unreasonable burden on the average data subject. "Meaningful information" about algorithmic logics is more technically possible than commonly thought, but this exacerbates a new "transparency fallacy": an illusion of remedy rather than anything substantively helpful. While rights-based approaches deserve a firm place in the toolbox, other forms of governance, such as impact assessments, "soft law," judicial review, and model repositories, deserve more attention, alongside catalyzing agencies to act for users in controlling algorithmic system design.
Blind Justice: Fairness with Encrypted Sensitive Attributes
Kilbertus, Niki, Gascón, Adrià, Kusner, Matt J., Veale, Michael, Gummadi, Krishna P., Weller, Adrian
Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race. To avoid disparate treatment, sensitive attributes should not be considered. On the other hand, in order to avoid disparate impact, sensitive attributes must be examined, e.g., in order to learn a fair model, or to check if a given model is fair. We introduce methods from secure multi-party computation which allow us to avoid both. By encrypting sensitive attributes, we show how an outcome-based fair model may be learned, checked, or have its outputs verified and held to account, without users revealing their sensitive attributes.
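To make the idea concrete, the following is a minimal illustrative sketch, not the authors' actual protocol, of how a per-group statistic might be computed over additively secret-shared sensitive attributes. The two-server setup, the field size, and all function names are assumptions introduced purely for this toy example; the paper itself develops full secure multi-party computation among users, a modeller, and a regulator for learning and verifying fair models.

```python
# Illustrative sketch only (hypothetical setup, not the Blind Justice protocol):
# two non-colluding servers receive additive shares of each user's sensitive
# attribute and jointly compute per-group positive-prediction rates without
# either server seeing any individual's group membership.
import secrets

P = 2**61 - 1          # prime modulus for additive secret sharing (assumed)
NUM_GROUPS = 2         # toy example with two sensitive-attribute groups

def share(value: int) -> tuple[int, int]:
    """Split an integer into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (value - r) % P

def one_hot(group: int) -> list[int]:
    return [1 if g == group else 0 for g in range(NUM_GROUPS)]

# Toy data: each user holds a sensitive group; model predictions (0/1)
# are treated as public weights in this simplified sketch.
groups      = [0, 1, 0, 1, 1, 0, 0, 1]
predictions = [1, 0, 1, 1, 0, 0, 1, 1]

# Each user secret-shares their one-hot group vector between the two servers.
server_a, server_b = [], []
for g in groups:
    shares = [share(v) for v in one_hot(g)]
    server_a.append([s[0] for s in shares])
    server_b.append([s[1] for s in shares])

def local_aggregate(shares_per_user, weights):
    """Each server computes a weighted sum of its shares locally."""
    agg = [0] * NUM_GROUPS
    for user_shares, w in zip(shares_per_user, weights):
        for g in range(NUM_GROUPS):
            agg[g] = (agg[g] + w * user_shares[g]) % P
    return agg

pos_a = local_aggregate(server_a, predictions)
pos_b = local_aggregate(server_b, predictions)
tot_a = local_aggregate(server_a, [1] * len(groups))
tot_b = local_aggregate(server_b, [1] * len(groups))

# Only the combined aggregates are revealed: positive-prediction rate per group.
for g in range(NUM_GROUPS):
    positives = (pos_a[g] + pos_b[g]) % P
    total     = (tot_a[g] + tot_b[g]) % P
    print(f"group {g}: positive rate = {positives}/{total}")
```

Because the predictions act as public weights here, each server only needs local linear operations on its shares; the full protocol in the paper handles richer computations, such as training and verifying a fair model, under secure multi-party computation.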