A Stereotype Content Analysis on Color-related Social Bias in Large Vision Language Models
Junhyuk Choi, Minju Kim, Yeseon Hong, Bugeun Kim
–arXiv.org Artificial Intelligence
As large vision-language models (LVLMs) rapidly advance, concerns are growing about their potential to learn and reproduce social biases and stereotypes. Previous studies of LVLM stereotypes face two primary limitations: metrics that overlook the importance of content words, and datasets that overlook the effect of color. To address these limitations, this study introduces new evaluation metrics based on the Stereotype Content Model (SCM). We also propose BASIC, a benchmark for assessing gender, race, and color stereotypes. Using the SCM metrics and BASIC, we conduct a study of eight LVLMs to uncover stereotypes. We report three findings: (1) the SCM-based evaluation is effective in capturing stereotypes; (2) LVLMs exhibit color stereotypes in their output alongside gender and race stereotypes; and (3) the interaction between model architecture and parameter size appears to affect stereotypes. We release BASIC publicly at [anonymized for review].
May-28-2025