PhotoGuard


Disrupting Diffusion-based Inpainters with Semantic Digression

Son, Geonho, Lee, Juhun, Woo, Simon S.

arXiv.org Artificial Intelligence

The fabrication of visual misinformation on the web and social media has increased exponentially with the advent of foundational text-to-image diffusion models. In particular, Stable Diffusion inpainters enable the synthesis of maliciously inpainted images of private individuals and copyrighted content, commonly known as deepfakes. To combat such generations, a disruption framework, PhotoGuard, has been proposed: it adds adversarial noise to the context image to disrupt inpainting synthesis. While that framework introduced a diffusion-friendly approach, its disruption is not sufficiently strong, and immunizing the context image demands substantial GPU memory and time. In our work, we re-examine both the minimal and the favorable conditions for a successful inpainting disruption, and we propose DDD, a "Digression guided Diffusion Disruption" framework. First, we identify the diffusion timestep range that is most adversarially vulnerable with respect to the hidden space. Within this scope of the noised manifold, we pose the problem as a semantic digression optimization: we maximize the distance between the inpainting instance's hidden states and a semantic-aware hidden-state centroid, calibrated both by Monte Carlo sampling of hidden states and by a discretely projected optimization in the token space. Effectively, our approach achieves stronger disruption and a higher success rate than PhotoGuard while lowering the GPU memory requirement and running the optimization up to three times faster.
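
To make the digression idea concrete, here is a minimal PyTorch sketch of the core loop as the abstract describes it, not the authors' implementation. A tiny CNN (`feature_net`) and a linear-interpolation `hidden` function stand in for the diffusion model's hidden-state extractor and forward noising; the timestep range `t_lo, t_hi` and the PGD hyperparameters are illustrative assumptions, and the token-space calibration of the centroid is omitted.

```python
# Illustrative sketch of a "semantic digression" disruption, assuming a
# stand-in feature extractor in place of the diffusion UNet's hidden states.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the diffusion model's hidden-state extractor.
feature_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.SiLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.Flatten(),
).eval()
for p in feature_net.parameters():
    p.requires_grad_(False)

def hidden(x, t):
    """Hidden states of the input noised to a toy timestep t in [0, 1]."""
    noised = (1 - t) * x + t * torch.randn_like(x)
    return feature_net(noised)

image = torch.rand(1, 3, 64, 64)           # context image to immunize
eps, alpha, steps, n_mc = 8 / 255, 1 / 255, 50, 8
t_lo, t_hi = 0.4, 0.6                       # assumed vulnerable timestep range

# Monte Carlo estimate of the semantic-aware hidden-state centroid.
with torch.no_grad():
    centroid = torch.stack([
        hidden(image, float(torch.empty(1).uniform_(t_lo, t_hi)))
        for _ in range(n_mc)
    ]).mean(0)

delta = torch.zeros_like(image, requires_grad=True)
for _ in range(steps):
    t = float(torch.empty(1).uniform_(t_lo, t_hi))
    dist = (hidden(image + delta, t) - centroid).pow(2).mean()
    dist.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()                 # digress from centroid
        delta.clamp_(-eps, eps)                            # L-infinity budget
        delta.copy_((image + delta).clamp(0, 1) - image)   # keep pixels valid
    delta.grad.zero_()

protected = (image + delta).clamp(0, 1).detach()
```

The key difference from a targeted attack like PhotoGuard's is the sign of the objective: rather than pulling the latent toward a fixed target, the perturbation pushes the hidden states away from their semantic centroid.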


PRIME: Protect Your Videos From Malicious Editing

Li, Guanlin, Yang, Shuai, Zhang, Jie, Zhang, Tianwei

arXiv.org Artificial Intelligence

With the development of generative models, the quality of generated content keeps increasing. Recently, open-source models have made it surprisingly easy to manipulate and edit photos and videos with just a few simple prompts. While these cutting-edge technologies have gained popularity, they have also given rise to concerns regarding the privacy and portrait rights of individuals. Malicious users can exploit these tools for deceptive or illegal purposes. Although some previous works focus on protecting photos against generative models, we find that protecting videos still lags behind protecting images in both efficiency and effectiveness. Therefore, we introduce our protection method, PRIME, to significantly reduce the time cost and improve the protection performance. Moreover, to evaluate our proposed protection method, we consider both objective metrics and human subjective metrics. Our evaluation results indicate that PRIME costs only 8.3% of the GPU hours required by the previous state-of-the-art method and achieves better protection results on both human evaluation and objective metrics. Code can be found at https://github.com/GuanlinLee/prime.
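
The abstract does not spell out PRIME's algorithm (see the linked repository for that), but the general shape of frame-wise video protection can be sketched as below. Everything here is an assumption for illustration: a toy `encoder` stands in for an editing model's image encoder, the zero `target` latent is arbitrary, and the warm-starting of the perturbation across frames is a generic efficiency trick exploiting temporal similarity, not PRIME's actual speed-up.

```python
# Generic per-frame encoder attack for video protection (illustration only,
# NOT the PRIME algorithm): push each frame's latent toward a fixed target
# so that downstream latent-space edits degrade.
import torch
import torch.nn as nn

torch.manual_seed(0)

encoder = nn.Sequential(                  # stand-in for an editing model's encoder
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.SiLU(),
    nn.Conv2d(8, 4, 3, stride=2, padding=1),
).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

video = torch.rand(16, 3, 64, 64)          # 16 toy frames in [0, 1]
target = torch.zeros_like(encoder(video[:1]))   # arbitrary "bad" latent target
eps, alpha, steps = 8 / 255, 2 / 255, 20

delta = torch.zeros(1, 3, 64, 64)           # shared perturbation, warm-started
protected = []
for frame in video:
    d = delta.clone().requires_grad_(True)
    for _ in range(steps):
        loss = (encoder(frame.unsqueeze(0) + d) - target).pow(2).mean()
        loss.backward()
        with torch.no_grad():
            d -= alpha * d.grad.sign()      # pull latent toward the target
            d.clamp_(-eps, eps)             # imperceptibility budget
        d.grad.zero_()
    delta = d.detach()                      # warm-start the next, similar frame
    protected.append((frame + delta.squeeze(0)).clamp(0, 1))
protected = torch.stack(protected)
```

Warm-starting from the previous frame's perturbation is one plausible way to cut optimization time on temporally coherent video; PRIME's reported 8.3% GPU-hour cost comes from its own techniques, which this sketch does not reproduce.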


These new tools could help protect our pictures from AI

MIT Technology Review

While nonconsensual deepfake porn has been used to torment women for years, the latest generation of AI makes it an even bigger problem. These systems are much easier to use than previous deepfake tech, and they can generate images that look completely convincing. Image-to-image AI systems, which allow people to edit existing images using generative AI, "can be very high quality … because it's basically based off of an existing single high-res image," Ben Zhao, a computer science professor at the University of Chicago, tells me. "The result that comes out of it is the same quality, has the same resolution, has the same level of details, because oftentimes [the AI system] is just moving things around." You can imagine my relief when I learned about a new tool that could help people protect their images from AI manipulation.


The Download: protecting photos from AI, and air-conditioning's dilemma

MIT Technology Review

There's currently nothing stopping someone from taking the selfie you posted online last week and editing it using powerful generative AI systems. Even worse, it might be impossible to prove that the resulting image is fake. The good news is that a new tool, created by researchers at MIT, could prevent this. The tool, called PhotoGuard, works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated. If someone tries to use an editing app based on a generative AI model to manipulate an image that has been "immunized" by PhotoGuard, the result will look unrealistic or warped.


This new tool could protect your pictures from AI manipulation

MIT Technology Review

The tool, called PhotoGuard, works like a protective shield by altering photos in tiny ways that are invisible to the human eye but prevent them from being manipulated. If someone tries to use an editing app based on a generative AI model such as Stable Diffusion to manipulate an image that has been "immunized" by PhotoGuard, the result will look unrealistic or warped. Right now, "anyone can take our image, modify it however they want, put us in very bad-looking situations, and blackmail us," says Hadi Salman, a PhD researcher at MIT who contributed to the research, which was presented at the International Conference on Machine Learning this week. PhotoGuard is "an attempt to solve the problem of our images being manipulated maliciously by these models," says Salman.
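
For readers curious how such "immunization" works in practice, the sketch below follows the publicly described idea behind PhotoGuard's simpler encoder-attack variant: perturb the photo so Stable Diffusion's VAE encodes it near a meaningless target latent, which makes downstream edits come out warped. The hyperparameters, the gray-image target, and the overall loop are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch of an encoder-style immunization against Stable Diffusion
# editing, in the spirit of PhotoGuard (illustrative, not the official code).
import torch
from diffusers import AutoencoderKL

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="vae"
).to(device).eval()
for p in vae.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 512, 512, device=device)   # your photo, in [0, 1]
x = image * 2 - 1                                    # the VAE expects [-1, 1]
target = vae.encode(torch.zeros_like(x)).latent_dist.mean  # gray-image latent

eps, alpha, steps = 16 / 255, 2 / 255, 40            # illustrative budget
delta = torch.zeros_like(x, requires_grad=True)
for _ in range(steps):
    latent = vae.encode((x + delta).clamp(-1, 1)).latent_dist.mean
    loss = (latent - target).pow(2).mean()           # pull latent toward target
    loss.backward()
    with torch.no_grad():
        delta -= alpha * delta.grad.sign()
        delta.clamp_(-eps, eps)                      # keep changes invisible
    delta.grad.zero_()

immunized = ((x + delta).clamp(-1, 1) + 1) / 2       # back to [0, 1]
```

Because the editing model now "sees" something close to a blank gray image in latent space, inpainting or img2img edits built on that latent tend to produce the unrealistic, warped results the article describes.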