contestability
From Stem to Stern: Contestability Along AI Value Chains
Balayn, Agathe, Pi, Yulu, Widder, David Gray, Alfrink, Kars, Yurrita, Mireia, Upadhyay, Sohini, Karusala, Naveena, Lyons, Henrietta, Turkay, Cagatay, Tessono, Christelle, Attard-Frost, Blair, Gadiraju, Ujwal
This workshop will grow and consolidate a community of interdisciplinary CSCW researchers focusing on the topic of contestable AI. As an outcome of the workshop, we will synthesize the most pressing opportunities and challenges for contestability along AI value chains in the form of a research roadmap. This roadmap will help shape and inspire imminent work in this field. Considering the length and depth of AI value chains, it will especially spur discussions around the contestability of AI systems at various sites along these chains. The workshop will serve as a platform for dialogue and demonstrations of concrete, successful, and unsuccessful examples of AI systems that (could or should) have been contested, to identify requirements, obstacles, and opportunities for designing and deploying contestable AI in various contexts. It will be held primarily as an in-person workshop, with some hybrid accommodation. The day will consist of individual presentations and group activities to stimulate ideation and inspire broad reflections on the field of contestable AI. Our aim is to facilitate interdisciplinary dialogue by bringing together researchers, practitioners, and stakeholders to foster the design and deployment of contestable AI.
- North America > Canada > Ontario > Toronto (0.30)
- Europe > Netherlands > South Holland > Delft (0.07)
- North America > Costa Rica > San José Province > San José (0.06)
- (14 more...)
- Law (1.00)
- Information Technology > Security & Privacy (0.68)
Challenging the Machine: Contestability in Government AI Systems
Landau, Susan, Dempsey, James X., Kamar, Ece, Bellovin, Steven M., Pool, Robert
In an October 2023 executive order (EO), President Biden issued a detailed but largely aspirational road map for the safe and responsible development and use of artificial intelligence (AI). The challenge for the January 24-25, 2024 workshop was to transform those aspirations regarding one specific but crucial issue -- the ability of individuals to challenge government decisions made about themselves -- into actionable guidance enabling agencies to develop, procure, and use genuinely contestable advanced automated decision-making systems. While the Administration has taken important steps since the October 2023 EO, the insights garnered from our workshop remain highly relevant, as the requirements for contestability of advanced decision-making systems are not yet fully defined or implemented. The workshop brought together technologists, members of government agencies and civil society organizations, litigators, and researchers in an intensive two-day meeting that examined the challenges that users, developers, and agencies face in enabling contestability of advanced automated decision-making systems. To ensure a free and open flow of discussion, the meeting was held under a modified version of the Chatham House Rule: participants were free to use any information or details they learned, but they could not attribute any remarks made at the meeting to the identity or affiliation of the speaker. The workshop summary that follows therefore anonymizes speakers and their affiliations. Where an agency, company, or organization is identified, the identification is drawn from a public, named source and does not necessarily reflect statements made by participants at the workshop. This document is a report of that workshop, along with recommendations and explanatory material.
- North America > United States > Idaho (0.05)
- North America > United States > Michigan (0.04)
- North America > United States > New York (0.04)
- (12 more...)
- Research Report (1.00)
- Overview (1.00)
- Instructional Material > Course Syllabus & Notes (0.34)
- Law > Statutes (1.00)
- Law > Litigation (1.00)
- Law > Civil Rights & Constitutional Law (1.00)
- (12 more...)
Contestable AI needs Computational Argumentation
Leofante, Francesco, Ayoobi, Hamed, Dejl, Adam, Freedman, Gabriel, Gorur, Deniz, Jiang, Junqi, Paulino-Passos, Guilherme, Rago, Antonio, Rapberger, Anna, Russo, Fabrizio, Yin, Xiang, Zhang, Dekai, Toni, Francesca
AI has become pervasive in recent years, yet state-of-the-art approaches predominantly neglect the need for AI systems to be contestable, even though contestability is advocated by AI guidelines (e.g. by the OECD) and by regulation of automated decision-making (e.g. the GDPR). In this position paper we explore how contestability can be achieved computationally in and for AI. We argue that contestable AI requires dynamic (human-machine and/or machine-machine) explainability and decision-making processes, whereby machines can (i) interact with humans and/or other machines to progressively explain their outputs and/or their reasoning, as well as assess grounds for contestation provided by these humans and/or other machines, and (ii) revise their decision-making processes to redress any issues successfully raised during contestation. Given that much of the current AI landscape is tailored to static AI, accommodating contestability will require a radical rethinking that, we argue, computational argumentation is ideally suited to support.
- North America > United States > New Mexico > Bernalillo County > Albuquerque (0.04)
- Europe > United Kingdom > England > Greater London > London (0.04)
- Europe > Finland > Uusimaa > Helsinki (0.04)
Recommendations for Government Development and Use of Advanced Automated Systems to Make Decisions about Individuals
Landau, Susan, Dempsey, James X., Kamar, Ece, Bellovin, Steven M.
Contestability -- the ability to effectively challenge a decision -- is critical to the implementation of fairness. In the context of governmental decision making about individuals, contestability is often constitutionally required as an element of due process; specific procedures may be required by state or federal law relevant to a particular program. In addition, contestability can be a valuable way to discover systemic errors, contributing to ongoing assessments and system improvement. On January 24-25, 2024, with support from the National Science Foundation and the William and Flora Hewlett Foundation, we convened a diverse group of government officials, representatives of leading technology companies, technology and policy experts from academia and the non-profit sector, advocates, and stakeholders for a workshop on advanced automated decision making, contestability, and the law. Informed by the workshop's rich and wide-ranging discussion, we offer these recommendations. A full report summarizing the discussion is in preparation.
- Law > Statutes (0.68)
- Government > Regional Government > North America Government > United States Government (0.67)
The flawed algorithm at the heart of Robodebt
Australia's Royal Commission into the Robodebt Scheme has published its findings. Various unnamed individuals are referred for potential civil or criminal investigation, but the report is also a timely reminder of the dangers posed by automated decision-making systems, and of how the best way to mitigate their risks is to instill a strong culture of ethics and systems of accountability in our institutions. The so-called Robodebt scheme was touted to save billions of dollars by using automation and algorithms to identify welfare fraud and overpayments. In the end, it serves as a salient lesson in the dangers of replacing human oversight and judgement with automated decision-making. It reminds us that the basic method was not merely flawed but illegal; that the scheme was premised on the false presumption that welfare recipients were cheats (rather than among society's most vulnerable); and that it lacked both transparency and oversight.
- Government (1.00)
- Law > Criminal Law (0.35)
Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute
Alfrink, Kars, Keller, Ianus, Doorn, Neelke, Kortuem, Gerd
Local governments increasingly use artificial intelligence (AI) for automated decision-making. Contestability, making systems responsive to dispute, is a way to ensure they respect human rights to autonomy and dignity. We investigate the design of public urban AI systems for contestability through the example of camera cars: human-driven vehicles equipped with image sensors. Applying a provisional framework for contestable AI, we use speculative design to create a concept video of a contestable camera car. Using this concept video, we then conduct semi-structured interviews with 17 civil servants who work with AI employed by a large northwestern European city. The resulting data is analyzed using reflexive thematic analysis to identify the main challenges facing the implementation of contestability in public AI. We describe how civic participation faces issues of representation, public AI systems should integrate with existing democratic practices, and cities must expand capacities for responsible AI development and operation.
- Europe > Netherlands > North Holland > Amsterdam (0.07)
- Europe > Germany > Hamburg (0.05)
- Europe > Netherlands > South Holland > Delft (0.05)
- (21 more...)
- Questionnaire & Opinion Survey (0.86)
- Research Report > New Finding (0.67)
- Personal > Interview (0.66)
- Transportation > Ground > Road (1.00)
- Law (1.00)
- Government (1.00)
Context-dependent Explainability and Contestability for Trustworthy Medical Artificial Intelligence: Misclassification Identification of Morbidity Recognition Models in Preterm Infants
Guzey, Isil, Ucar, Ozlem, Ciftdemir, Nukhet Aladag, Acunas, Betul
Although machine learning (ML) models of AI achieve high performance in medicine, they are not free of errors. Empowering clinicians to identify incorrect model recommendations is crucial for engendering trust in medical AI. Explainable AI (XAI) aims to address this requirement by clarifying AI reasoning to support the end users. Several recent studies on biomedical imaging have achieved promising results. Nevertheless, solutions for models using tabular data do not yet meet clinicians' requirements. This paper proposes a methodology to support clinicians in identifying failures of ML models trained with tabular data. We built our methodology on three main pillars: decomposing the feature set by leveraging clinical context latent space, assessing the clinical association of global explanations, and Latent Space Similarity (LSS) based local explanations. We demonstrated our methodology on ML-based recognition of preterm infant morbidities caused by infection. The risk of mortality, lifelong disability, and antibiotic resistance due to model failures remains an open research question in this domain. With our approach, we succeeded in identifying misclassification cases of two models. By contextualizing local explanations, our solution provides clinicians with actionable insights that support their autonomy in making informed final decisions.
- Asia > Middle East > Republic of Türkiye (0.04)
- Europe > Middle East > Republic of Türkiye > Edirne Province > Edirne (0.04)
- North America > United States > Iowa (0.04)
- Europe > United Kingdom (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis (0.68)
- (2 more...)
UK government positions itself away from EU on AI regulation while testing how light touch it can go
The UK government launched a trio of documents concerning AI on 18 July, all with the general purpose of fostering innovation, increasing public trust in the technology, and giving clarity to business. But the detail that will allow this must wait until at least the end of the year, when the government will publish its white paper on AI regulation, itself another pause for reflection. The department responsible is clear that the UK approach will differ from the EU's, with regulation spread across six bodies rather than the EU's one dedicated regulator, but less clear on what the regulation will be, other than that it will be so light-touch it may amount to little more than guidance. In the meantime, one of the documents calls for views on regulation over the next 10 weeks. Last September, a trio of agencies (the Department for Digital, Culture, Media and Sport; the Department for Business, Energy and Industrial Strategy; and the Office for Artificial Intelligence) released the National AI Strategy guidance, which promised useful developments such as a transparency standard for AI coding, something of a world first (subsequently published in December).
- Law (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (1.00)
The Transformation of Patient-Clinician Relationships with AI-based Medical Advice
One of the dramatic trends at the intersection of computing and healthcare has been patients' increased access to medical information, ranging from self-tracked physiological data to genetic data, tests, and scans. Increasingly, however, patients and clinicians have access to advanced machine learning-based tools for diagnosis, prediction, and recommendation based on large amounts of data, some of it patient-generated. Consequently, just as organizations have had to deal with a "Bring Your Own Device" (BYOD) reality in which employees use their personal devices (phones and tablets) for some aspects of their work, a similar reality of "Bring Your Own Algorithm" (BYOA) is emerging in healthcare with its own challenges and support demands. BYOA is changing patient-clinician interactions and the technologies, skills, and workflows related to them. Situations in which patients have direct access to algorithmic advice are becoming commonplace.
- North America > United States > New York > New York County > New York City (0.07)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
Contestable Black Boxes
Tubella, Andrea Aler, Theodorou, Andreas, Dignum, Virginia, Michael, Loizos
The right to contest a decision that has consequences for individuals or society is a well-established democratic right. Although this right is also explicitly included in the GDPR in reference to automated decision-making, its study seems to have received much less attention in the AI literature compared, for example, to the right to explanation. This paper investigates the type of assurances that are needed in the contesting process when algorithmic black boxes are involved, opening new questions about the interplay of contestability and explainability. We argue that specialised complementary methodologies need to be developed to evaluate automated decision-making when a particular decision is contested. Further, we propose a combination of well-established software engineering and rule-based approaches as a possible socio-technical solution to the issue of contestability, one of the new democratic challenges posed by the automation of decision-making.
- Europe > Middle East > Cyprus (0.05)
- North America > United States > Virginia (0.05)
- Europe > Sweden > Västerbotten County > Umeå (0.04)
- Europe > Germany > North Rhine-Westphalia > Cologne Region > Bonn (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (0.89)
- Transportation > Air (0.87)
- Government > Regional Government > North America Government > United States Government (0.46)