Revisiting Privacy, Utility, and Efficiency Trade-offs when Fine-Tuning Large Language Models
Das, Soumi, Kolling, Camila, Khan, Mohammad Aflah, Amani, Mahsa, Ghosh, Bishwamittra, Wu, Qinyuan, Speicher, Till, Gummadi, Krishna P.
– arXiv.org Artificial Intelligence
We study the inherent trade-offs in minimizing privacy risks and maximizing utility, while maintaining high computational efficiency, when fine-tuning large language models (LLMs). A number of recent works in privacy research have attempted to mitigate the privacy risks posed by memorization of fine-tuning data by using differentially private (DP) training methods, albeit at a significantly higher computational cost (inefficiency). In parallel, several works in systems research have focused on developing parameter-efficient fine-tuning methods (e.g., LoRA), but few works, if any, have investigated whether such efficient methods enhance or diminish privacy risks. In this paper, we investigate this gap and arrive at a surprising conclusion: efficient fine-tuning methods like LoRA mitigate privacy risks to a degree comparable to private fine-tuning methods like DP. Our empirical finding directly contradicts the prevailing wisdom that privacy and efficiency objectives are at odds during fine-tuning. The finding is established by (a) carefully defining measures of privacy and utility that distinguish between memorizing sensitive and non-sensitive tokens in the training and test datasets used for fine-tuning, and (b) extensive evaluations using multiple open-source language models from the Pythia, Gemma, and Llama families and different domain-specific datasets.
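To make the comparison concrete, the sketch below shows one way a LoRA fine-tuning run and a token-level memorization score (split into sensitive and non-sensitive tokens) could be set up with the Hugging Face peft library. The model name, adapter rank, sensitivity labelling, and the per-token negative log-likelihood measure are illustrative assumptions, not the paper's exact configuration or metric.

```python
# Hypothetical sketch: LoRA adapter setup plus a token-level memorization score
# that separates "sensitive" from "non-sensitive" tokens. The per-token negative
# log-likelihood below is an illustrative stand-in for the paper's measures.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "EleutherAI/pythia-1b"  # assumed example from the Pythia family
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Parameter-efficient fine-tuning: only the low-rank adapter weights are trained.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # fused attention projection in Pythia/GPT-NeoX
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of all parameters


def token_nll_split(text: str, sensitive_mask: list[bool]) -> tuple[float, float]:
    """Mean negative log-likelihood over sensitive vs. non-sensitive tokens.

    `sensitive_mask` has one boolean per predicted token (all tokens after the
    first); how tokens are labelled as sensitive is dataset-specific and assumed
    to be given here.
    """
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids=ids).logits
    # Log-probabilities of each next token under the fine-tuned model.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    nll = -log_probs[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    mask = torch.tensor(sensitive_mask, dtype=torch.bool)
    return nll[mask].mean().item(), nll[~mask].mean().item()
```

In such a setup, a markedly lower loss on sensitive tokens from the fine-tuning set than on comparable held-out text would indicate memorization of sensitive content; the paper's actual privacy and utility measures are defined in the full text.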
Feb-18-2025