Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs)

Leander Girrbach, Yiran Huang, Stephan Alaniz, Trevor Darrell, Zeynep Akata

arXiv.org Artificial Intelligence 

Pre-trained large language models (LLMs) have been reliably integrated with visual input for multimodal tasks. We study gender bias in 22 popular open-source VLAs with respect to personality traits, skills, and occupations. Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances. Similarly, they tend to attribute more skills and positive personality traits to women than to men, and we see a consistent tendency to associate negative personality traits with men. To eliminate the gender bias in these models, we find that finetuning-based debiasing methods achieve the best tradeoff between debiasing and retaining performance on downstream tasks. We argue for pre-deployment gender bias assessment in VLAs and motivate further development of debiasing strategies to ensure equitable societal outcomes.