Unequal Uncertainty: Rethinking Algorithmic Interventions for Mitigating Discrimination from AI
Sargeant, Holli, Jorgensen, Mackenzie, Shah, Arina, Weller, Adrian, Bhatt, Umang
Uncertainty in artificial intelligence (AI) predictions poses urgent legal and ethical challenges for AI-assisted decision-making. We examine two algorithmic interventions that act as guardrails for human-AI collaboration: selective abstention, which withholds high-uncertainty predictions from human decision-makers, and selective friction, which delivers those predictions together with salient warnings or disclosures that slow the decision process. Research has shown that selective abstention based on uncertainty can inadvertently exacerbate disparities and disadvantage under-represented groups that disproportionately receive uncertain predictions. In this paper, we provide the first integrated socio-technical and legal analysis of uncertainty-based algorithmic interventions. Through two case studies, AI-assisted consumer credit decisions and AI-assisted content moderation, we demonstrate how the seemingly neutral use of uncertainty thresholds can trigger discriminatory impacts. We argue that, although both interventions pose risks of unlawful discrimination under UK law, selective friction offers a promising pathway toward fairer and more accountable AI-assisted decision-making by preserving transparency and encouraging more cautious human judgment.
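The two interventions the abstract describes can be contrasted in a minimal sketch. This is purely illustrative and not from the paper: the function name `route_prediction`, the uncertainty threshold, and the policy labels are all assumptions; the only point is that abstention withholds a high-uncertainty prediction entirely, while friction surfaces it alongside a salient warning.

```python
# Hypothetical sketch of uncertainty-based routing (names and threshold are
# illustrative assumptions, not the paper's implementation).

ABSTAIN_THRESHOLD = 0.8  # assumed cutoff: predictions above this are "high uncertainty"

def route_prediction(prediction, uncertainty, policy="friction"):
    """Return what the human decision-maker sees under each intervention.

    `uncertainty` is a score in [0, 1]; `policy` is "abstention" or "friction".
    """
    if uncertainty <= ABSTAIN_THRESHOLD:
        # Low-uncertainty predictions pass through unchanged under both policies.
        return {"shown": True, "prediction": prediction, "warning": None}
    if policy == "abstention":
        # Selective abstention: the high-uncertainty prediction is withheld.
        return {"shown": False, "prediction": None, "warning": None}
    # Selective friction: the prediction is shown, but with a salient disclosure
    # intended to slow the decision and prompt more cautious human judgment.
    return {
        "shown": True,
        "prediction": prediction,
        "warning": f"Model uncertainty is high ({uncertainty:.2f}); review carefully.",
    }

print(route_prediction("approve", 0.35))
print(route_prediction("approve", 0.92, policy="abstention"))
print(route_prediction("approve", 0.92, policy="friction"))
```

Note that if one group disproportionately receives scores above the threshold, abstention silently removes AI assistance for that group, which is the disparity mechanism the paper analyses; friction keeps the prediction visible, preserving transparency about when and why the model is uncertain.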
Modulating Language Model Experiences through Frictions
Collins, Katherine M., Chen, Valerie, Sucholutsky, Ilia, Kirk, Hannah Rose, Sadek, Malak, Sargeant, Holli, Talwalkar, Ameet, Weller, Adrian, Bhatt, Umang
Language models are transforming the ways that their users engage with the world. Despite impressive capabilities, over-consumption of language model outputs risks propagating unchecked errors in the short term and damaging human capabilities for critical thinking in the long term, particularly in knowledge-based tasks. How can we develop scaffolding around language models to curate more appropriate use? We propose selective frictions for language model experiences, inspired by behavioral science interventions, to dampen misuse. Frictions involve small modifications to a user's experience, e.g., the addition of a button impeding model access and reminding a user of their expertise relative to the model. In a user study, we observe how imposing a friction on LLM access shifts user behavior in a multi-topic question-answering task, a representative setting for LLM use in education and information retrieval. We find that frictions modulate over-reliance by driving down users' click rates while minimally affecting accuracy on the affected topics. Yet frictions may have unintended effects: we find marked differences in users' click behavior even on topics where no friction was applied. Our contributions motivate further study of human-AI behavioral interaction to inform more effective and appropriate LLM use.