(Beyond) Reasonable Doubt: Challenges that Public Defenders Face in Scrutinizing AI in Court
arXiv.org Artificial Intelligence
Accountable use of AI systems in high-stakes settings relies on making those systems contestable. In this paper we study efforts to contest AI systems in practice by examining how public defenders scrutinize AI in court. We present findings from interviews with 17 people in the U.S. public defense community to understand their perceptions of and experiences scrutinizing computational forensic software (CFS) -- automated decision systems that the government uses to convict and incarcerate, such as facial recognition, gunshot detection, and probabilistic genotyping tools. We find that our participants faced challenges assessing and contesting CFS reliability due to difficulties (a) navigating how CFS is developed and used, (b) overcoming judges' and jurors' uncritical perceptions of CFS, and (c) gathering CFS expertise. To conclude, we provide recommendations that center the technical, social, and institutional context to better position interventions such as performance evaluations to support contestability in practice.
Mar-13-2024