Calibrated Trust in Dealing with LLM Hallucinations: A Qualitative Study
Ryser, Adrian, Allwein, Florian, Schlippe, Tim
arXiv.org Artificial Intelligence
Hallucinations are outputs by Large Language Models (LLMs) that are factually incorrect yet appear plausible [1]. This paper investigates how such hallucinations influence users' trust in, and interaction with, LLMs. To explore this in everyday use, we conducted a qualitative study with 192 participants. Our findings show that hallucinations do not result in blanket mistrust but instead lead to context-sensitive trust calibration. Building on the calibrated trust model by Lee & See [2] and Afroogh et al.'s trust-related factors [3], we confirm expectancy [3], [4], prior experience [3], [4], [5], and user expertise & domain knowledge [3], [4] as user-related (human) trust factors, and identify intuition as an additional factor relevant for hallucination detection. Additionally, we found that trust dynamics are further influenced by contextual factors, particularly perceived risk [3] and decision stakes [6]. Consequently, we validate the recursive trust calibration process proposed by Blöbaum [7] and extend it by including intuition as a user-related trust factor. Based on these insights, we propose practical recommendations for responsible and reflective LLM use.
Dec-11-2025