Addressing Tokenization Inconsistency in Steganography and Watermarking Based on Large Language Models
