DEAN: Deactivating the Coupled Neurons to Mitigate Fairness-Privacy Conflicts in Large Language Models

Chen Qian, Dongrui Liu, Jie Zhang, Yong Liu, Jing Shao

arXiv.org Artificial Intelligence 

Ensuring awareness of fairness and privacy in Large Language Models (LLMs) is critical. Interestingly, we discover a counter-intuitive trade-off: enhancing an LLM's privacy awareness by fine-tuning it on thousands of samples with Supervised Fine-Tuning (SFT) significantly decreases its fairness awareness. To address this issue, inspired by information theory, we introduce a training-free method to DEActivate the fairness and privacy coupled Neurons (DEAN), which theoretically and empirically decreases the mutual information between fairness and privacy awareness. Extensive experimental results demonstrate that DEAN eliminates the trade-off phenomenon and significantly improves LLMs' fairness and privacy awareness simultaneously, e.g., improving Qwen-2-7B-Instruct's fairness awareness by 12.2% and privacy awareness by 14.0%. More crucially, DEAN remains robust and effective with limited annotated data, or even when only malicious fine-tuning data is available, whereas SFT methods may fail to perform properly in such scenarios. We hope this study provides valuable insights into concurrently addressing fairness and privacy concerns in LLMs and can be integrated into comprehensive frameworks to develop more ethical and responsible AI systems. Our code is available at https://github.com/ChnQ/DEAN.

In recent years, as LLMs increasingly permeate sensitive areas such as healthcare, finance, and education (Li et al., 2023b; Yuan et al., 2023; Al-Smadi, 2023), concerns regarding their fairness and privacy implications have become critically important (Liu et al., 2023; Sun et al., 2024a). For instance, when queried for sensitive information such as a social security number, we expect the LLM to refuse to provide it.
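To make the neuron-deactivation idea concrete, the following is a minimal, illustrative sketch rather than the paper's implementation: it scores each hidden neuron's importance on fairness-related and privacy-related inputs, treats neurons that rank highly for both as "coupled", and zeroes them out without any training. The toy MLP, the activation-times-outgoing-weight scoring rule, and the top-k overlap criterion are assumptions made for illustration.

```python
# Illustrative sketch only: identify neurons important to BOTH fairness and
# privacy inputs, then deactivate them in place (training-free).
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyMLP(nn.Module):
    """Stand-in for a single MLP block inside an LLM."""
    def __init__(self, d_in=16, d_hidden=64, d_out=2):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def neuron_importance(model, x):
    """Score each hidden neuron by mean |activation| times its outgoing-weight norm."""
    with torch.no_grad():
        act = torch.relu(model.fc1(x))                 # (batch, d_hidden)
        out_norm = model.fc2.weight.abs().sum(dim=0)   # (d_hidden,)
        return act.abs().mean(dim=0) * out_norm

def deactivate_coupled_neurons(model, fairness_x, privacy_x, top_k=8):
    """Zero the hidden neurons that rank in the top-k for both input sets."""
    fair_top = set(neuron_importance(model, fairness_x).topk(top_k).indices.tolist())
    priv_top = set(neuron_importance(model, privacy_x).topk(top_k).indices.tolist())
    coupled = sorted(fair_top & priv_top)              # the "coupled" neurons
    with torch.no_grad():                              # training-free weight edit
        model.fc1.weight[coupled, :] = 0.0
        model.fc1.bias[coupled] = 0.0
    return coupled

model = ToyMLP()
fairness_x = torch.randn(32, 16)  # stand-in for fairness-related prompts' features
privacy_x = torch.randn(32, 16)   # stand-in for privacy-related prompts' features
print("deactivated coupled neurons:", deactivate_coupled_neurons(model, fairness_x, privacy_x))
```

In this toy form, deactivating only the overlapping neurons leaves neurons that serve just one of the two behaviors untouched, which is the intuition behind reducing the coupling (mutual information) between fairness and privacy awareness without retraining.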