VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models

Hanene F. Z. Brachemi Meftah, Wassim Hamidouche, Sid Ahmed Fezza, Olivier Déforges

arXiv.org Artificial Intelligence 

Recent years have witnessed remarkable progress in the development of Vision-Language Models (VLMs) capable of processing both textual and visual inputs. These models have demonstrated impressive performance, leading to their widespread adoption in various applications. However, this widespread adoption raises serious concerns regarding user privacy, particularly when models inadvertently process or expose private visual information. We propose a novel attack strategy that selectively conceals information within designated Regions of Interest (ROIs) in an image, effectively preventing VLMs from accessing sensitive content while preserving the semantic integrity of the remaining image. Unlike conventional adversarial attacks, which often disrupt the entire image, our method maintains high coherence in the unmasked areas. Experimental results across three state-of-the-art VLMs, namely LLaVA, Instruct-BLIP, and BLIP2-T5, demonstrate up to a 98% reduction in the detection of targeted ROIs, while keeping the global image semantics intact, as confirmed by high similarity scores between clean and adversarial outputs. We believe that this work contributes to a more privacy-conscious use of multimodal models and offers a practical tool for further research, with the source code publicly available at https://github.com/hbrachemi/Vlm

Vision-Language Models (VLMs) have emerged as a powerful paradigm in artificial intelligence, seamlessly integrating visual and textual information to achieve remarkable performance in various tasks such as image captioning, visual question answering, and document understanding [1]-[5]. This has led to their rapid adoption in numerous applications, including content creation, customer service, and information retrieval. However, the widespread use of VLMs raises critical concerns regarding the privacy and security of user data.
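To make the ROI-restricted idea described above concrete, the sketch below shows one plausible way such an attack could be set up in PyTorch: the perturbation is multiplied by a binary ROI mask so that only pixels inside the sensitive region are modified, and it is optimized so that the encoder's embedding of the attacked image moves toward that of the image with the ROI blanked out. The encoder interface, the blanked-ROI target, the cosine-similarity objective, and the PGD hyperparameters are illustrative assumptions and are not taken from the paper; the authors' actual loss and optimization procedure may differ.

```python
import torch
import torch.nn.functional as F


def roi_masked_attack(encoder, image, roi_mask, steps=100, eps=8 / 255, alpha=1 / 255):
    """PGD-style perturbation restricted to an ROI (illustrative sketch).

    encoder : callable mapping a (1, 3, H, W) image in [0, 1] to an embedding
              (e.g. a CLIP-style vision encoder; assumed interface)
    image   : clean image tensor, shape (1, 3, H, W), values in [0, 1]
    roi_mask: binary tensor, shape (1, 1, H, W), 1 inside the region to conceal
    """
    with torch.no_grad():
        # Target: embedding of the image with the ROI blanked out, so the
        # optimized image "looks" to the encoder as if the ROI content were absent.
        target_emb = encoder(image * (1.0 - roi_mask))

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = (image + delta * roi_mask).clamp(0.0, 1.0)  # perturb ROI pixels only
        adv_emb = encoder(adv)
        # Pull the adversarial embedding toward the blanked-ROI target
        # (illustrative objective, not necessarily the paper's exact loss).
        loss = 1.0 - F.cosine_similarity(adv_emb, target_emb, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed gradient descent step
            delta.clamp_(-eps, eps)             # keep the perturbation bounded
        delta.grad.zero_()

    return (image + delta.detach() * roi_mask).clamp(0.0, 1.0)
```

Because the perturbation is zero outside the mask, pixels in the rest of the image are untouched by construction, which is one way to read the abstract's claim that the semantics of the unmasked areas remain intact.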