Adversarial Attacks on Image Generation With Made-Up Words
arXiv.org Artificial Intelligence
Text-guided image generation models have made impressive strides in recent years. State-of-the-art models, such as DALL-E 2 [1], Imagen [2], and Parti [3], can generate coherent images matching a remarkably wide variety of prompts in virtually any visual domain and style. While the ability to generate high-quality images of any subject is an exciting development for content creation, it also raises ethical questions about potential misuse of this technology. In particular, text-guided image generation models may be used to produce fake imagery of real individuals for misinformation (so-called "deepfakes" [4]), or to produce visual content deemed offensive or harmful. These concerns have been used to justify limiting access to large text-guided image generation models and moderating their use through content policies enforced by prompt filters.
Aug-4-2022