Street-Level AI: Are Large Language Models Ready for Real-World Judgments?
Pokharel, Gaurab, Farabi, Shafkat, Fowler, Patrick J., Das, Sanmay
arXiv.org Artificial Intelligence
A surge of recent work explores the ethical and societal implications of large-scale AI models that make "moral" judgments. Much of this literature focuses either on alignment with human judgments through various thought experiments or on the group fairness implications of AI judgments. However, the most immediate and likely use of AI is to assist or fully replace so-called street-level bureaucrats: the individuals who decide how to allocate scarce social resources or whether to approve benefits. A rich body of work on local justice examines how societies determine prioritization mechanisms in such domains. In this paper, we examine how well LLM judgments align with human judgments, as well as with socially and politically determined vulnerability scoring systems currently used in the domain of homelessness resource allocation. Crucially, we use real data on those needing services (maintaining strict confidentiality by using only local large models) to perform our analyses. We find that LLM prioritizations are extremely inconsistent in several ways: internally across different runs, between different LLMs, and between LLMs and the vulnerability scoring systems. At the same time, LLMs demonstrate qualitative consistency with lay human judgments in pairwise testing.
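The run-to-run inconsistency described above can be quantified with a rank-correlation statistic. Below is a minimal illustration (not the paper's actual methodology) of one such measure, Kendall's tau, computed between two hypothetical LLM prioritization runs over the same clients; all client IDs and ranks are invented for the example.

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two rankings of the same clients.

    rank_a, rank_b: dicts mapping client id -> rank (1 = highest priority).
    Returns a value in [-1, 1]; 1.0 means the two orderings agree on every
    pair of clients, -1.0 means they disagree on every pair.
    """
    clients = list(rank_a)
    concordant = discordant = 0
    for i, j in combinations(clients, 2):
        # A pair is concordant when both rankings order i and j the same way.
        sign = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if sign > 0:
            concordant += 1
        elif sign < 0:
            discordant += 1
    n_pairs = len(clients) * (len(clients) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical example: two runs of the same model ranking five clients.
run_1 = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
run_2 = {"A": 2, "B": 1, "C": 3, "D": 5, "E": 4}
tau = kendall_tau(run_1, run_2)  # 8 concordant, 2 discordant pairs -> 0.6
```

A tau near 1.0 across repeated runs would indicate internal consistency; values near zero, as the abstract suggests the authors observed, indicate that the model's prioritizations are close to uncorrelated from one run to the next.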
Sep-5-2025