Gender Bias in Text-to-Video Generation Models: A case study of Sora
Nadeem, Mohammad, Sohail, Shahab Saquib, Cambria, Erik, Schuller, Björn W., Hussain, Amir
arXiv.org Artificial Intelligence
The advent of AI-generated content (AIGC) has spurred extensive scholarly research and revolutionized industries such as content generation [3,4] and medical imaging [5,6]. Significant milestones, such as OpenAI's release of ChatGPT in late 2022, have propelled the field toward the ambitious goal of Artificial General Intelligence (AGI). Among major generative AI tools, text-to-video (T2V) generation models have gained immense popularity for their ability to create visually compelling and contextually accurate videos from textual descriptions [7]. Leveraging breakthroughs in generative AI, T2V models such as OpenAI's Sora [8] have showcased unprecedented capabilities in turning textual input into dynamic video output, transforming visual storytelling, advertising, and content creation. However, generative AI models often inherit and amplify the social biases and stereotypes embedded in their training data [9,10]: sourced from vast and diverse internet repositories, this data frequently reflects cultural prejudices, societal inequities, and skewed portrayals of different demographics [15].
Jan-10-2025