Trust in Vision-Language Models: Insights from a Participatory User Workshop
Agnese Chiatti, Lara Piccolo, Sara Bernardini, Matteo Matteucci, Viola Schiaffonati
arXiv.org Artificial Intelligence
With the growing deployment of Vision-Language Models (VLMs), pre-trained on large image-text and video-text datasets, it is critical to equip users with the tools to discern when to trust these systems. However, examining how user trust in VLMs builds and evolves remains an open problem. This problem is exacerbated by the increasing reliance on AI models as judges for experimental validation, used to bypass the cost and implications of running participatory design studies directly with users. Following a user-centred approach, this paper presents preliminary results from a workshop with prospective VLM users. Insights from this pilot workshop inform future studies aimed at contextualising trust metrics and strategies for participant engagement to fit the case of user-VLM interaction.
Nov-18-2025
- Country:
  - Europe
    - Germany (0.04)
    - Italy
      - Emilia-Romagna > Metropolitan City of Bologna > Bologna (0.04)
      - Lombardy > Milan (0.04)
    - United Kingdom > England > Oxfordshire > Oxford (0.14)
  - North America > United States > New York (0.04)
- Genre:
  - Research Report > New Finding (0.68)
- Industry:
  - Education (0.46)
  - Health & Medicine (0.68)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning > Neural Networks > Deep Learning (0.47)
    - Natural Language (1.00)
    - Representation & Reasoning (1.00)
    - Robots (1.00)
    - Vision (1.00)