VIGNETTE: Socially Grounded Bias Evaluation for Vision-Language Models
Raj, Chahat, Wei, Bowen, Caliskan, Aylin, Anastasopoulos, Antonios, Zhu, Ziwei
arXiv.org Artificial Intelligence
While bias in large language models (LLMs) is well studied, similar concerns in vision-language models (VLMs) have received comparatively little attention. Existing VLM bias studies often focus on portrait-style images and gender-occupation associations, overlooking broader and more complex social stereotypes and their implied harms. This work introduces VIGNETTE, a large-scale VQA benchmark with 30M+ images for evaluating bias in VLMs through a question-answering framework spanning four directions: factuality, perception, stereotyping, and decision making. Moving beyond narrowly focused studies, we assess how VLMs interpret identities in contextualized settings, revealing how models make trait and capability assumptions and exhibit patterns of discrimination. Drawing on social psychology, we examine how VLMs connect visual identity cues to trait- and role-based inferences, encoding social hierarchies through biased selections. Our findings uncover subtle, multifaceted, and surprising stereotypical patterns, offering insights into how VLMs construct social meaning from inputs.
May-30-2025