Safeguard Fine-Tuned LLMs Through Pre- and Post-Tuning Model Merging
Hua Farn, Hsuan Su, Shachi H. Kumar, Saurav Sahay, Shang-Tse Chen, Hung-yi Lee
– arXiv.org Artificial Intelligence
Fine-tuning large language models (LLMs) for downstream tasks is a widely adopted approach, but it often leads to safety degradation in safety-aligned LLMs. Existing solutions typically address this issue by incorporating additional safety data, which is often impractical. In this paper, we address the question: How can we improve downstream task performance while preserving safety in LLMs without relying on additional safety data? We propose a simple and effective method that maintains the inherent safety of LLMs while enhancing their downstream task performance: merging the weights of the pre- and post-fine-tuned safety-aligned models. Experimental results across various downstream tasks, models, and merging methods demonstrate that this approach effectively mitigates safety degradation while improving downstream task performance, offering a practical solution for adapting safety-aligned LLMs.
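The core idea is to combine the parameters of the same model before and after fine-tuning. The sketch below shows one such merge, a simple linear interpolation of the two state dicts using Hugging Face Transformers; the checkpoint names and the merge weight alpha are illustrative assumptions, and the paper evaluates several merging methods rather than prescribing this particular one.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoints: the safety-aligned model before fine-tuning and
# the same model after downstream fine-tuning. Names and alpha are assumptions
# for illustration, not values from the paper.
base_name = "org/safety-aligned-llm"        # pre-fine-tuning (safety-aligned) checkpoint
tuned_name = "org/safety-aligned-llm-sft"   # post-fine-tuning (downstream-task) checkpoint
alpha = 0.5                                  # interpolation weight between the two checkpoints

base = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.float32)
tuned = AutoModelForCausalLM.from_pretrained(tuned_name, torch_dtype=torch.float32)

base_state = base.state_dict()
tuned_state = tuned.state_dict()

# Linearly interpolate every parameter tensor of the two checkpoints.
merged_state = {
    name: (1.0 - alpha) * base_state[name] + alpha * tuned_state[name]
    for name in base_state
}

tuned.load_state_dict(merged_state)
tuned.save_pretrained("merged-safety-preserving-model")
```

With alpha = 1.0 this recovers the fine-tuned model and with alpha = 0.0 the original safety-aligned model; intermediate values trade off downstream performance against preserved safety behavior.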
Dec-27-2024
- Country:
- Asia > Middle East (0.46)
- North America (0.68)
- Genre:
- Research Report > New Finding (1.00)
- Industry:
- Health & Medicine (0.46)