A Semantic Invariant Robust Watermark for Large Language Models
Liu, Aiwei, Pan, Leyi, Hu, Xuming, Meng, Shiao, Wen, Lijie
Watermark algorithms for large language models (LLMs) have achieved extremely high accuracy in detecting text generated by LLMs. Such algorithms typically add extra watermark logits to the LLM's logits at each generation step. However, prior algorithms face a trade-off between attack robustness and security robustness, because the watermark logits for a token are determined by a fixed number of preceding tokens: a small number leads to low security robustness, while a large number results in insufficient attack robustness. In this work, we propose a semantic invariant watermarking method for LLMs that provides both attack robustness and security robustness. The watermark logits in our work are determined by the semantics of all preceding tokens. Specifically, we utilize another embedding LLM to generate semantic embeddings for all preceding tokens, and these semantic embeddings are transformed into the watermark logits by our trained watermark model. Subsequent analyses and experiments demonstrate the attack robustness of our method under semantically invariant perturbations such as synonym substitution and text paraphrasing. Finally, we also show that our watermark possesses adequate security robustness.

As the quality of text generated by large language models (LLMs) continues to improve, it addresses many practical challenges on one hand while giving rise to a range of new issues on the other. The detection and labeling of machine-generated text have therefore become extremely important. Text watermarking techniques for LLMs usually embed specific information during text generation to allow high-accuracy detection of LLM-generated text. The mainstream approach for embedding such information is to add extra watermark logits on top of the logits generated by the LLM. For example, Kirchenbauer et al. (2023) divide the vocabulary into red and green lists and increase the scores of the green tokens as the watermark logits.
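The sketch below illustrates the general idea described above, not the authors' released implementation: a semantic embedding of all preceding tokens is mapped by a small trained network to per-token watermark logits, which are then added to the LLM's logits before sampling. The names `WatermarkModel`, `next_token_distribution`, `delta`, and the layer sizes are illustrative assumptions.

```python
# Minimal sketch (assumed names and shapes, not the paper's official code):
# watermark logits derived from the semantics of the whole prefix, added to the
# LLM's next-token logits with strength `delta`.
import torch
import torch.nn as nn

VOCAB_SIZE = 32000   # assumed vocabulary size of the generating LLM
EMBED_DIM = 768      # assumed dimension of the embedding LLM's output

class WatermarkModel(nn.Module):
    """Small trained network mapping a prefix's semantic embedding to a
    vector of watermark logits (one score per vocabulary entry)."""
    def __init__(self, embed_dim: int, vocab_size: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 512),
            nn.ReLU(),
            nn.Linear(512, vocab_size),
            nn.Tanh(),  # keep the watermark logits bounded
        )

    def forward(self, prefix_embedding: torch.Tensor) -> torch.Tensor:
        return self.net(prefix_embedding)

def next_token_distribution(llm_logits: torch.Tensor,
                            prefix_embedding: torch.Tensor,
                            watermark_model: WatermarkModel,
                            delta: float = 2.0) -> torch.Tensor:
    """Add scaled watermark logits to the LLM logits and renormalize.
    Because `prefix_embedding` encodes the meaning of *all* preceding tokens,
    a paraphrased prefix yields nearly the same watermark logits."""
    watermark_logits = watermark_model(prefix_embedding)
    return torch.softmax(llm_logits + delta * watermark_logits, dim=-1)

if __name__ == "__main__":
    # Random stand-ins for the LLM's logits and the embedding LLM's output.
    wm = WatermarkModel(EMBED_DIM, VOCAB_SIZE)
    llm_logits = torch.randn(VOCAB_SIZE)
    prefix_emb = torch.randn(EMBED_DIM)
    probs = next_token_distribution(llm_logits, prefix_emb, wm)
    next_token = torch.multinomial(probs, num_samples=1)
    print(next_token.item())
```

In this reading, the contrast with red/green-list schemes such as Kirchenbauer et al. (2023) is that the bias here depends on a continuous semantic embedding of the entire prefix rather than on a hash of a fixed window of preceding token IDs, which is what makes the watermark signal stable under meaning-preserving edits.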
arXiv.org Artificial Intelligence
Oct-10-2023