SEAL: Safety-enhanced Aligned LLM Fine-tuning via Bilevel Data Selection
Han Shen, Pin-Yu Chen, Payel Das, Tianyi Chen
arXiv.org Artificial Intelligence
Fine-tuning on task-specific data to boost downstream performance is a crucial step in leveraging Large Language Models (LLMs). However, previous studies have demonstrated that fine-tuning on a few adversarial samples, or even on benign data, can greatly compromise the model's pre-equipped alignment and safety capabilities. In this work, we propose SEAL, a novel framework to enhance safety in LLM fine-tuning. SEAL learns a data ranker via bilevel optimization that up-ranks safe, high-quality fine-tuning data and down-ranks unsafe or low-quality data. Models trained with SEAL demonstrate superior quality over multiple baselines, with win rate increases of 8.5% and 9.7% over random selection on the Llama-3-8b-Instruct and Merlinite-7b models, respectively. Our code is available at https://github.com/hanshen95/SEAL.
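To make the bilevel idea concrete, below is a minimal PyTorch sketch of this style of bilevel data selection. It is not the authors' implementation: a toy linear model stands in for the LLM, per-example softmax weights stand in for the learned data ranker, a small "safe" reference set stands in for the safety objective, and a one-step differentiable inner update approximates the lower-level fine-tuning. All names, sizes, and hyperparameters are illustrative assumptions; see the repository above for the actual method.

```python
# A minimal, self-contained sketch of bilevel data selection in the spirit of
# SEAL -- NOT the authors' implementation. The toy linear model, the "safe"
# reference set, the softmax data weights, and the one-step differentiable
# inner update are all illustrative assumptions.
import torch

torch.manual_seed(0)

# Toy stand-ins: n fine-tuning examples and a small safe reference set.
n, d = 32, 8
X_train, y_train = torch.randn(n, d), torch.randn(n, 1)
X_safe, y_safe = torch.randn(16, d), torch.randn(16, 1)

w = torch.zeros(d, 1, requires_grad=True)    # model parameters (lower level)
scores = torch.zeros(n, requires_grad=True)  # per-example ranker scores (upper level)
opt_scores = torch.optim.Adam([scores], lr=0.1)
inner_lr = 0.05

def weighted_loss(params, weights):
    """Fine-tuning loss with per-example data weights."""
    per_example = ((X_train @ params - y_train) ** 2).squeeze(-1)
    return (weights * per_example).sum()

for step in range(100):
    weights = torch.softmax(scores, dim=0)  # data weights from ranker scores

    # Lower level: one differentiable fine-tuning step on the weighted data.
    inner_loss = weighted_loss(w, weights)
    grad_w = torch.autograd.grad(inner_loss, w, create_graph=True)[0]
    w_adapted = w - inner_lr * grad_w

    # Upper level: safety/quality loss of the adapted model on the safe set,
    # backpropagated through the inner step to update the ranker scores.
    safe_loss = ((X_safe @ w_adapted - y_safe) ** 2).mean()
    opt_scores.zero_grad()
    safe_loss.backward()
    opt_scores.step()

    # Commit the inner update; w.grad populated by backward() is never used.
    with torch.no_grad():
        w -= inner_lr * grad_w

# High-scoring examples are those whose inclusion most reduces loss on the
# safe reference set; low scorers would be down-ranked or filtered out.
ranked = torch.argsort(scores.detach(), descending=True)
print("Top-ranked example indices:", ranked[:5].tolist())
```

The one-step hypergradient here is a common first-order approximation to the bilevel problem (in the style of MAML/DARTS); the paper's actual lower-level solver and safety objective may differ.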
Oct-10-2024