Who Gets the Kidney? Human-AI Alignment, Indecision, and Moral Values
Dickerson, John P., Hosseini, Hadi, Khanna, Samarth, Pierce, Leona
arXiv.org Artificial Intelligence
The rapid integration of Large Language Models (LLMs) into high-stakes decision-making -- such as allocating scarce resources like donor organs -- raises critical questions about their alignment with human moral values. We systematically evaluate the behavior of several prominent LLMs against human preferences in kidney allocation scenarios and show that LLMs: i) exhibit stark deviations from human values in prioritizing various attributes, and ii) in contrast to humans, rarely express indecision, opting for deterministic decisions even when alternative indecision mechanisms (e.g., coin flipping) are provided. Nonetheless, we show that low-rank supervised fine-tuning with few samples is often effective in improving both decision consistency and the calibration of indecision modeling. These findings illustrate the necessity of explicit alignment strategies for LLMs in moral/ethical domains.
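The abstract attributes the alignment improvements to low-rank supervised fine-tuning, i.e., adapting a frozen pretrained weight matrix with a small trainable rank-r update rather than retraining all parameters. A minimal numerical sketch of that idea, with assumed (hypothetical) dimensions `d` and rank `r` not taken from the paper:

```python
import numpy as np

# Illustrative sketch of a low-rank (LoRA-style) update: the frozen
# pretrained weight W is adapted by adding a rank-r product B @ A,
# so only A and B are trained. Sizes here are hypothetical examples.
rng = np.random.default_rng(0)
d, r = 512, 8                            # hidden size and adapter rank (assumed)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized
                                         # so the adapted model starts identical to W

W_adapted = W + B @ A                    # effective weight after adaptation

full_params = d * d                      # 262144 parameters if fine-tuned fully
lora_params = A.size + B.size            # 8192 trainable parameters with the adapter
print(full_params, lora_params)
```

Because only `A` and `B` are updated, the trainable-parameter count drops by roughly a factor of d / (2r), which is why a handful of labeled allocation examples can suffice for this kind of fine-tuning.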
Jun-3-2025