VLA-Mark: A cross-modal watermark for large vision-language alignment models
Shuliang Liu, Qi Zheng, Jesse Jiaxi Xu, Yibo Yan, Junyan Zhang, He Geng, Aiwei Liu, Peijie Jiang, Jia Liu, Yik-Cheung Tam, Xuming Hu
arXiv.org Artificial Intelligence
Vision-language models demand watermarking solutions that protect intellectual property without compromising multimodal coherence. Existing text watermarking methods disrupt visual-textual alignment through biased token selection and static embedding strategies, leaving semantically critical concepts vulnerable. We propose VLA-Mark, a vision-aligned framework that embeds detectable watermarks while preserving semantic fidelity through cross-modal coordination. Our approach integrates multiscale visual-textual alignment metrics, combining localized patch affinity, global semantic coherence, and contextual attention patterns, to guide watermark injection without model retraining. An entropy-sensitive mechanism dynamically balances watermark strength and semantic preservation, prioritizing visual grounding during low-uncertainty generation phases. Experiments show 7.4% lower perplexity (PPL) and 26.6% higher BLEU than conventional methods, with near-perfect detection (98.8% AUC). The framework demonstrates 96.1% resilience against attacks such as paraphrasing and synonym substitution while maintaining text-visual consistency, establishing new standards for quality-preserving multimodal watermarking.
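The entropy-sensitive mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a hypothetical per-token visual-textual alignment score in [0, 1], and the function and parameter names (`watermark_bias`, `align_scores`, `max_bias`) are invented for clarity.

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def watermark_bias(probs, align_scores, max_bias=2.0):
    """Entropy-sensitive watermark strength (illustrative sketch).

    At low entropy the model is confident, often on visually grounded
    tokens, so the bias shrinks to preserve semantics; at high entropy
    the bias grows, hiding the watermark where many tokens are plausible.
    align_scores: hypothetical per-token visual-textual alignment in
    [0, 1]; strongly aligned tokens receive less distortion.
    """
    h = entropy(probs)
    h_max = math.log(len(probs))        # entropy of the uniform distribution
    strength = max_bias * (h / h_max)   # scale bias by normalized entropy
    return [strength * (1.0 - a) for a in align_scores]

# A uniform (maximum-entropy) distribution yields full-strength biases,
# attenuated per token by its alignment score.
biases = watermark_bias([0.25, 0.25, 0.25, 0.25], [0.0, 0.5, 1.0, 0.0])
```

A peaked distribution (e.g. one token at probability 0.97) would instead yield biases close to zero for every token, which is how low-uncertainty generation phases stay largely undistorted.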
Sep-22-2025