Enhancing LLM Watermark Resilience Against Both Scrubbing and Spoofing Attacks
Huanming Shen, Baizhou Huang, Xiaojun Wan
–arXiv.org Artificial Intelligence
Watermarking is a promising defense against the misuse of large language models (LLMs), yet it remains vulnerable to scrubbing and spoofing attacks. This vulnerability stems from an inherent trade-off governed by watermark window size: smaller windows resist scrubbing better but are easier to reverse-engineer, enabling low-cost statistics-based spoofing attacks. This work breaks the trade-off by introducing a novel mechanism, equivalent texture keys, where multiple tokens within a watermark window can independently support detection. Building on this redundancy, we propose a novel watermark scheme with Sub-vocabulary decomposed Equivalent tExture Keys (SEEK). It achieves a Pareto improvement, increasing resilience against scrubbing attacks without compromising robustness to spoofing. Experiments demonstrate SEEK's superiority over prior methods, yielding spoofing robustness gains of +88.2%/+92.3%/+82.0% and scrubbing robustness gains of +10.2%/+6.4%/+24.6% across diverse dataset settings.
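The window-size trade-off the abstract describes can be made concrete with a generic green-list watermark detector in the style of common LLM watermarking schemes (this is an illustrative sketch, not SEEK itself; the function names, the hash-based seeding, and the parameter `gamma` are assumptions for illustration). The last `window_size` tokens seed a pseudorandom partition of the vocabulary into a "green" subset; detection measures how often generated tokens land in their window's green list. A small window means few distinct windows, so an attacker can observe each green list often enough to reverse-engineer it (enabling spoofing), while a large window means a paraphrase that alters any token in the window re-keys the green list (enabling scrubbing).

```python
import hashlib
import random


def green_list(window, vocab_size, gamma=0.5):
    """Seed a PRNG from the watermark window (the preceding tokens)
    and select a fraction gamma of the vocabulary as the 'green' set."""
    seed = int(hashlib.sha256(str(tuple(window)).encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])


def green_fraction(tokens, window_size, vocab_size, gamma=0.5):
    """Detection statistic: the fraction of tokens that fall in the
    green list keyed by their preceding window. Watermarked text scores
    well above gamma; unwatermarked text scores near gamma."""
    hits, total = 0, 0
    for i in range(window_size, len(tokens)):
        window = tokens[i - window_size : i]
        if tokens[i] in green_list(window, vocab_size, gamma):
            hits += 1
        total += 1
    return hits / max(total, 1)
```

Changing `window_size` here changes only the keying, which is exactly the knob the trade-off turns on; SEEK's contribution, per the abstract, is to add redundancy inside the window so that detection no longer depends on every window token surviving intact.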
Dec-9-2025