Harm Amplification in Text-to-Image Models
Susan Hao, Renee Shelby, Yuchi Liu, Hansa Srinivasan, Mukul Bhutani, Burcu Karagol Ayan, Shivani Poddar, Sarah Laszlo
arXiv.org Artificial Intelligence
Warning: The content of this paper, as well as some blurred images shown, may include references to nudity, sexualization, violence, and gore. Text-to-image (T2I) models represent a significant advance in generative AI; however, safety concerns remain about their potential to produce harmful image outputs even when users input seemingly safe prompts. This phenomenon, in which T2I models generate harmful representations that were not explicit in the input, poses a potentially greater risk than adversarial prompts, since it leaves users unintentionally exposed to harms. Our paper addresses this issue by first introducing a formal definition for the phenomenon, termed harm amplification. We further contribute methodologies to quantify harm amplification that assess the harm of the model output in the context of the user input. We then empirically examine how to apply these methodologies to simulate real-world deployment scenarios, including quantifying disparate impacts across genders resulting from harm amplification. Together, our work offers researchers tools to comprehensively address safety challenges in T2I systems and contributes to the responsible deployment of generative AI models.
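The abstract describes quantifying harm amplification by comparing the harm of a model's output against the harm of the user's input. A minimal sketch of that idea, assuming harm scores in [0, 1] from external safety classifiers (the function name, threshold, and scoring convention here are illustrative assumptions, not the paper's exact formulation):

```python
def harm_amplification(prompt_harm: float, output_harm: float,
                       threshold: float = 0.2) -> bool:
    """Flag amplification when the output's harm score exceeds the
    prompt's harm score by more than `threshold`.

    `prompt_harm` and `output_harm` are assumed to come from external
    safety classifiers scoring in [0, 1]; all names are hypothetical.
    """
    return (output_harm - prompt_harm) > threshold

# A seemingly safe prompt (harm 0.05) yielding a harmful image (harm 0.8)
# would be flagged; an output whose harm roughly matches the prompt's
# (0.65 vs. 0.6) would not.
print(harm_amplification(0.05, 0.8))   # True
print(harm_amplification(0.6, 0.65))   # False
```

The key design choice, consistent with the abstract, is that output harm is judged relative to input harm rather than in isolation, so harmful outputs from already-harmful prompts are not counted as amplification.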
Feb-1-2024