Identifying Implicit Social Biases in Vision-Language Models
Kimia Hamidieh, Haoran Zhang, Walter Gerych, Thomas Hartvigsen, Marzyeh Ghassemi
arXiv.org Artificial Intelligence
Vision-language models such as CLIP (Contrastive Language-Image Pretraining) are becoming increasingly popular for a wide range of multimodal retrieval tasks. However, prior work has shown that large language and deep vision models can learn historical biases contained in their training sets, leading to the perpetuation of stereotypes and potential downstream harm. In this work, we conduct a systematic analysis of the social biases present in CLIP, with a focus on the interaction between the image and text modalities. We first propose a taxonomy of social biases called So-B-IT, which contains 374 words categorized across ten types of bias; each type can lead to societal harm if associated with a particular demographic group. Using this taxonomy, we examine the images retrieved by CLIP from a facial image dataset when each word is used as part of a prompt. We find that CLIP frequently displays undesirable associations between harmful words and specific demographic groups, such as retrieving mostly pictures of Middle Eastern men when asked to retrieve images of a "terrorist". Finally, we analyze the source of these biases, showing that the same harmful stereotypes we identify are also present in a large image-text dataset used to train CLIP models. Our findings highlight the importance of evaluating and addressing bias in vision-language models and suggest the need for transparency and fairness-aware curation of large pre-training datasets.
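The probe the abstract describes amounts to ranking a pool of face images by CLIP similarity to a word-bearing text prompt. Below is a minimal sketch of such a retrieval probe, assuming the Hugging Face `transformers` library with the public `openai/clip-vit-base-patch32` checkpoint; the prompt template, `face_image_paths`, and the top-k cutoff are illustrative placeholders, not the paper's exact setup.

```python
# Hedged sketch of a CLIP retrieval probe: rank face images by cosine
# similarity to a text prompt containing one taxonomy word. Assumes the
# public openai/clip-vit-base-patch32 checkpoint; dataset and prompt
# wording are placeholders, not the authors' exact configuration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def top_k_images(prompt: str, image_paths: list[str], k: int = 10):
    """Return the k images most similar to the prompt under CLIP."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    with torch.no_grad():
        text_inputs = processor(text=[prompt], return_tensors="pt", padding=True)
        image_inputs = processor(images=images, return_tensors="pt")
        text_emb = model.get_text_features(**text_inputs)
        image_emb = model.get_image_features(**image_inputs)
    # Normalize so the dot product is cosine similarity, as in CLIP.
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    sims = (image_emb @ text_emb.T).squeeze(-1)  # one score per image
    top = sims.topk(min(k, len(image_paths)))
    return [(image_paths[i], sims[i].item()) for i in top.indices.tolist()]

# Hypothetical usage: insert one So-B-IT word into a prompt template and
# inspect the demographics of the retrieved faces.
# ranked = top_k_images("a photo of a terrorist", face_image_paths, k=50)
```

A real audit would embed the image pool in batches once and reuse it across all 374 prompts; the per-call encoding here just keeps the sketch self-contained.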
Nov-1-2024
- Country:
  - Europe (1.00)
  - North America > United States (0.93)
- Genre:
  - Research Report > New Finding (0.48)
- Industry:
  - Education (1.00)
  - Government (1.00)
  - Health & Medicine > Therapeutic Area
    - Psychiatry/Psychology (0.46)
  - Law
    - Civil Rights & Constitutional Law (0.92)
    - Criminal Law (0.68)
  - Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
  - Leisure & Entertainment (1.00)
- Technology:
  - Information Technology > Artificial Intelligence
    - Machine Learning
      - Neural Networks > Deep Learning (0.67)
      - Performance Analysis > Accuracy (0.46)
    - Natural Language (1.00)
    - Vision > Face Recognition (1.00)