Differentially-private text generation degrades output language quality
arXiv.org Artificial Intelligence
Ensuring user privacy by synthesizing data from large language models (LLMs) fine-tuned under differential privacy (DP) has recently become popular. However, the impact of DP fine-tuning on the language quality and utility of the texts these models produce has not been investigated. In this work, we tune five LLMs on three corpora under four privacy levels and assess the length, grammatical correctness, and lexical diversity of the generated texts. We also probe the utility of the synthetic outputs in downstream classification tasks, such as book genre recognition from book descriptions and cause-of-death recognition from verbal autopsies. The results indicate that LLMs tuned under stronger privacy constraints produce texts that are shorter by at least 77%, less grammatically correct by at least 9%, and less diverse by at least 10% in bi-gram diversity. Furthermore, their accuracy on downstream classification tasks decreases, which may limit the usefulness of the generated synthetic data.
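One of the metrics the abstract names, bi-gram diversity, can be illustrated with a minimal sketch of the standard distinct-2 metric: the ratio of unique bi-grams to total bi-grams in a text. The whitespace tokenizer and the exact metric variant are assumptions here; the paper's implementation may differ.

```python
def distinct_n(text: str, n: int = 2) -> float:
    """Ratio of unique n-grams to total n-grams (0.0 when the text is too short)."""
    tokens = text.split()  # naive whitespace tokenization (assumption)
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)
```

A fully repetitive text scores low (e.g. `distinct_n("a a a a")` gives 1/3, since only one of three bi-grams is unique), while a text with no repeated bi-grams scores 1.0, so a drop of at least 10% in this score under stronger privacy reflects more repetitive output.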
Sep-16-2025