Speculative Model Risk in Healthcare AI: Using Storytelling to Surface Unintended Harms
Xingmeng Zhao, Dan Schumacher, Veronica Rammouz, Anthony Rios
arXiv.org Artificial Intelligence
Artificial intelligence (AI) is rapidly transforming healthcare, enabling the fast development of tools like stress monitors, wellness trackers, and mental health chatbots. However, rapid, low-barrier development can introduce risks of bias, privacy violations, and unequal access, especially when systems ignore real-world contexts and diverse user needs. Many recent methods use AI to detect risks automatically, but this can reduce human engagement in understanding how harms arise and whom they affect. We present a human-centered framework that generates user stories and supports multi-agent discussions to help people think creatively about potential benefits and harms before deployment. In a user study, participants who read stories recognized a broader range of harms, distributing their responses more evenly across all 13 harm types. In contrast, those who did not read stories focused primarily on privacy and well-being (58.3% of responses). Our findings show that storytelling helped participants speculate about a broader range of harms and benefits and think more creatively about AI's impact on users.
Oct-17-2025
- Country:
- Africa > Eswatini
- Asia
- Middle East > Jordan (0.05)
- Singapore (0.04)
- Europe
- Germany > Hamburg (0.04)
- Italy > Tuscany
- Florence (0.04)
- Middle East > Malta
- Eastern Region > Northern Harbour District > St. Julian's (0.04)
- Switzerland > Basel-City
- Basel (0.04)
- North America
- Canada > British Columbia
- Vancouver (0.04)
- Mexico > Mexico City
- Mexico City (0.04)
- United States
- California > San Diego County
- San Diego (0.04)
- Hawaii > Honolulu County
- Honolulu (0.04)
- Texas (0.04)
- Genre:
- Research Report
- Experimental Study (1.00)
- New Finding (1.00)
- Industry:
- Health & Medicine
- Consumer Health (1.00)
- Health Care Technology (1.00)
- Therapeutic Area > Psychiatry/Psychology (1.00)
- Technology:
- Information Technology > Artificial Intelligence
- Applied AI (1.00)
- Issues > Social & Ethical Issues (1.00)
- Machine Learning > Neural Networks
- Deep Learning (0.95)
- Natural Language
- Chatbot (1.00)
- Large Language Model (1.00)
- Representation & Reasoning > Agents (1.00)