Article 22
Formalising Human-in-the-Loop: Computational Reductions, Failure Modes, and Legal-Moral Responsibility
Chiodo, Maurice, Müller, Dennis, Siewert, Paul, Wetherall, Jean-Luc, Yasmine, Zoya, Burden, John
We use the notions of oracle machines and reductions from computability theory to formalise different human-in-the-loop (HITL) setups for AI systems, distinguishing between trivial human monitoring (i.e., total functions), single-endpoint human action (i.e., many-one reductions), and highly involved human-AI interaction (i.e., Turing reductions). We then show that the legal status and safety of different setups vary greatly. We present a taxonomy for categorising HITL failure modes, highlighting the practical limitations of HITL setups. We then identify omissions in UK and EU legal frameworks, which focus on HITL setups that may not always achieve the desired ethical, legal, and sociotechnical outcomes. We suggest areas where the law should recognise the effectiveness of different HITL setups and assign responsibility in these contexts, avoiding human "scapegoating". Our work shows an unavoidable trade-off between the attribution of legal responsibility and technical explainability. Overall, we show how HITL setups involve many technical design decisions and can be prone to failures outside the humans' control. Our formalisation and taxonomy open up a new analytic perspective on the challenges of creating HITL setups, helping inform AI developers and lawmakers on designing HITL setups to better achieve their desired outcomes.
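To make the mapping concrete, the sketch below (ours, not the paper's; all function names are hypothetical) contrasts the three setups by when, and how often, the human is consulted:

```python
from typing import Callable

# 1. Trivial human monitoring (total function): the AI computes the outcome
#    entirely on its own; the human merely observes and cannot alter it.
def monitored_decision(ai: Callable[[str], str], case: str) -> str:
    outcome = ai(case)
    print(f"human observes: {case!r} -> {outcome!r}")  # observation only
    return outcome

# 2. Single endpoint human action (many-one reduction): the AI transforms
#    the input once, then makes exactly one human call whose answer *is*
#    the final decision.
def endpoint_decision(ai_transform: Callable[[str], str],
                      human_verdict: Callable[[str], str],
                      case: str) -> str:
    return human_verdict(ai_transform(case))

# 3. Highly involved human-AI interaction (Turing reduction): the AI may
#    query the human "oracle" adaptively, any number of times, and do
#    further computation with the answers.
def interactive_decision(human: Callable[[str], bool], case: str) -> str:
    if human(f"is {case!r} high-risk?"):        # first oracle query
        if human(f"escalate {case!r}?"):        # adaptive follow-up query
            return "manual review"
        return "reject"
    return "approve"
```

The distinguishing feature mirrors the reductions: the endpoint setup makes a single, final human call, whereas the interactive setup can issue adaptive oracle queries and post-process the answers.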
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.28)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > France > Île-de-France > Paris > Paris (0.04)
- (4 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.46)
The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR
State, Laura, Colmenarejo, Alejandra Bringas, Beretta, Andrea, Ruggieri, Salvatore, Turini, Franco, Law, Stephanie
Explainable AI (XAI) provides methods to understand non-interpretable machine learning models. However, we know little about what legal experts expect from these explanations, including their legal compliance with, and value under, European Union legislation. To close this gap, we present the Explanation Dialogues, an expert focus study to uncover the expectations, reasoning, and understanding of legal experts and practitioners towards XAI, with a specific focus on the European General Data Protection Regulation. The study consists of an online questionnaire and follow-up interviews, and is centered around a use case in the credit domain. We extract a set of hierarchical and interconnected codes using grounded theory, and present the standpoints of the participating experts towards XAI. We find that the presented explanations are hard to understand and lack information, and discuss issues that can arise from the differing interests of the data controller and the data subject. Finally, we present a set of recommendations for developers of XAI methods, and indications of legal areas of discussion. Among others, the recommendations address the presentation, choice, and content of an explanation, technical risks, and the end user, while we provide legal pointers on the contestability of explanations, transparency thresholds, intellectual property rights, and the relationships between the involved parties.
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Overview (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.48)
GDPR and the AI Act interplay: Lessons from FPF's ADM Case-Law Report - Future of Privacy Forum
In May 2022, the Future of Privacy Forum (FPF) launched a comprehensive Report analyzing case-law under the General Data Protection Regulation (GDPR) applied to real-life cases involving Automated Decision-Making (ADM). Our research highlighted that the GDPR's protections for individuals against forms of ADM and profiling go significantly beyond Article 22 (which provides for the right of individuals not to be subject to decisions based solely on automated processing that produces legal effects or significantly impacts them) and are currently being applied by courts and Data Protection Authorities (DPAs) alike. These protections range from detailed transparency obligations, to applying the fairness principle to avoid situations of discrimination, to strict conditions for valid consent in ADM cases. As EU lawmakers discuss the amendments they would like to include in the European Commission (EC)'s Artificial Intelligence (AI) Act Proposal, what lessons can be drawn from GDPR enforcement precedents, as outlined in the Report, when deciding on the scope and obligations of the Act? This blog will explore: the link between the GDPR's provisions relevant to ADM and the AI Act Proposal (1); how the AI Act's concepts of providers and users fare compared to the GDPR's controllers and processors (2); how the AI Act facilitates GDPR compliance for the deployers of AI systems (3); the opportunities to enhance or clarify obligations under the AI Act through the lens of ADM jurisprudence (4); the overlaps between GDPR enforcement precedents and the AI Act's prohibited practices or high-risk use cases (5); the issue of redress under the GDPR and the AI Act (6); and a compilation of lessons learned from the FPF Report in the context of the debates around the AI Act (7). Note: when referring to case numbers in this blog, the author uses the numbering of cases in the FPF Report.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.87)
Breaking down the AI regulations
Companies, governments, and other institutions have started embedding artificial intelligence into their products, services, processes, and decision-making to a great extent. This has raised serious questions about how data is used by these systems and what the implications are. The questions become even more pressing when we consider the complex, evolving algorithms that propose health diagnoses, approve loans, or even autonomously drive cars. Now more than ever, it is essential to develop AI tools that can be trusted and used responsibly, as AI has, and will continue to have, wide-ranging economic impacts across manufacturing, transportation, health, education, and many other sectors. This can be done through the development of public-sector policies and laws that promote and regulate AI. The topic is a recent one among regulators globally: between 2016 and 2020, a wave of AI regulations and guidelines was published in order to maintain social control over the use of algorithms in our everyday lives.
- Europe > United Kingdom (0.31)
- North America > United States (0.30)
- Asia > China (0.17)
- (4 more...)
- Law > Statutes (0.77)
- Government > Regional Government > Europe Government > United Kingdom Government (0.31)
UK Uber drivers are taking the algorithm to court – TechCrunch
A group of U.K. Uber drivers has launched a legal challenge against the company's subsidiary in the Netherlands. The complaints relate to access to personal data and algorithmic accountability. Uber drivers and Uber Eats couriers are being invited to join the challenge, which targets Uber's use of profiling and data-fueled algorithms to manage gig workers in Europe. Platform workers involved in the case are also seeking to exercise a broader suite of data access rights baked into EU data protection law. It looks like a fascinating test of how far existing legal protections wrap around automated decisions at a time when regional lawmakers are busy drawing up a risk-based framework for regulating applications of artificial intelligence. Many uses of AI technology look set to remain subject only to protections baked into the existing General Data Protection Regulation (GDPR).
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Transportation > Ground > Road (0.96)
AI predictions 2020: Artificial Intelligence grows up
Over the last few years, artificial intelligence (AI) has been the enfant terrible of the business world: a technology full of unconventional and sometimes controversial behaviour that has shocked, provoked and enchanted audiences worldwide. But now it's time for AI to grow up. Businesses and consumers are tired of having the same debates around the hype vs reality of AI. In 2020, I see three opportunities for this to happen across responsibility, advocacy and regulation. As AI becomes more pervasive, we're likely to see those wronged by it inspired to take action.
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Banking & Finance (0.75)
- Law > Statutes (0.73)
Making sense of the GDPR & Artificial Intelligence paradox
The General Data Protection Regulation (GDPR) came into force in May 2018 to unify and regulate how data is processed, used, stored, and exchanged for citizens and residents within the European Union (EU). While this law has been in effect for some time now, it still raises multiple questions for businesses around the world. This is especially true both for those who provide and for those who leverage Artificial Intelligence (AI) while conducting business in the EU. AI depends on a healthy flow of data to drive business growth and generate valuable business insights. Article 22 of the GDPR concerns automated profiling and decision-making, and outlines the ramifications of the incorrect use of data in these circumstances.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
GDPR -- How does it impact AI?
The vast scope of GDPR has raised fresh challenges -- chief among them is the complex interaction between AI and the GDPR. In particular, this shines a spotlight on Article 22, which concerns automated profiling and decision-making, where the incorrect use of personal data can have huge ramifications for the individuals concerned. The problem is that existing AI system logic takes automated decisions without user consent. Since data is the engine behind AI, Article 22 impacts every industry hoping to leverage the power of technology to drive efficiencies through automated means. In an increasingly data-reliant business landscape, how can organisations reconcile the advent of disruptive technologies and their inherent risks while remaining fully compliant?
- Europe > United Kingdom (0.49)
- Asia > China (0.05)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
NYC Automated Decision-Making Task Force Forum Provides Insight Into Broader Efforts to Regulate Artificial Intelligence – Lexology
More and more entities are deploying machine learning and artificial intelligence to automate tasks previously performed by humans. Such efforts carry with them real benefits, such as the enhancement of operational efficiency and the reduction of costs, but they also raise a number of concerns regarding their potential impacts on human society, particularly as computer algorithms are increasingly used to determine important outcomes like individuals' treatment within the criminal justice system. This mixture of benefits and concerns is starting to attract the interest of regulators. Efforts in the European Union, Canada, and the United States have initiated an ongoing discussion around how to regulate "automated decision-making" and what principles should guide it. And while not all of these regulatory efforts will directly implicate private companies, they may nonetheless provide insight for companies seeking to build consumer trust in their artificial intelligence systems or better prepare themselves for the overall direction that regulation is taking.
- North America > Canada (0.26)
- North America > United States > New York (0.06)
- Europe > United Kingdom (0.06)
- North America > United States > Washington (0.05)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (0.31)
Enslaving the Algorithm: From a "Right to an Explanation" to a "Right to Better Decisions"?
Edwards, Lilian, Veale, Michael
As concerns about unfairness and discrimination in "black box" machine learning systems rise, a legal "right to an explanation" has emerged as a compellingly attractive approach for challenge and redress. We outline recent debates on the limited provisions in European data protection law, and introduce and analyze newer explanation rights in French administrative law and the draft modernized Council of Europe Convention 108. While individual rights can be useful, in privacy law they have historically unreasonably burdened the average data subject. "Meaningful information" about algorithmic logics is more technically possible than commonly thought, but this exacerbates a new "transparency fallacy": an illusion of remedy rather than anything substantively helpful. While rights-based approaches deserve a firm place in the toolbox, other forms of governance, such as impact assessments, "soft law," judicial review, and model repositories, deserve more attention, alongside catalyzing agencies acting for users to control algorithmic system design.
- North America > United States (0.14)
- Europe > United Kingdom > England > Nottinghamshire > Nottingham (0.04)
- Europe > Ireland (0.04)
- (2 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.68)