LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models
Wu, Yichao, Xiang, Yafei, Huo, Shuning, Gong, Yulu, Liang, Penghao
In addressing the computational and memory demands of fine-tuning Large Language Models (LLMs), we propose LoRA-SP (Streamlined Partial Parameter Adaptation), a novel approach utilizing randomized half-selective parameter freezing within the Low-Rank Adaptation (LoRA) framework. This method efficiently balances pre-trained knowledge retention and adaptability for task-specific optimizations. Through a randomized mechanism, LoRA-SP determines which parameters to update or freeze, significantly reducing computational and memory requirements without compromising model performance. We evaluated LoRA-SP across several benchmark NLP tasks, demonstrating its ability to achieve competitive performance with substantially lower resource consumption compared to traditional full-parameter fine-tuning and other parameter-efficient techniques. LoRA-SP's innovative approach not only facilitates the deployment of advanced NLP models in resource-limited settings but also opens new research avenues into effective and efficient model adaptation strategies.
arXiv.org Artificial Intelligence
Feb-28-2024
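
To make the described mechanism concrete, below is a minimal PyTorch sketch of randomized partial freezing of LoRA parameters. The abstract does not specify the freezing granularity or implementation details, so the per-rank-component gradient mask, the class name LoRASPLinear, and the update_prob parameter are illustrative assumptions rather than the authors' actual method.

```python
import torch
import torch.nn as nn


class LoRASPLinear(nn.Module):
    """Linear layer with a low-rank (LoRA) update whose adaptation
    parameters are partially frozen at random. The per-rank-component
    gradient mask used here is an illustrative assumption, not the
    paper's exact selection rule."""

    def __init__(self, in_features, out_features, rank=8, update_prob=0.5):
        super().__init__()
        # Pre-trained weight stays frozen, as in standard LoRA.
        self.weight = nn.Parameter(
            torch.empty(out_features, in_features), requires_grad=False
        )
        nn.init.kaiming_uniform_(self.weight)

        # Low-rank factors: A is (rank x in), B is (out x rank).
        # B starts at zero, as in standard LoRA, so the initial update is zero.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

        # Randomly choose which rank components remain trainable; the rest
        # are frozen by zeroing their gradients with a backward hook.
        mask = (torch.rand(rank) < update_prob).float()
        self.register_buffer("update_mask", mask)
        self.lora_A.register_hook(lambda g: g * self.update_mask.unsqueeze(1))
        self.lora_B.register_hook(lambda g: g * self.update_mask.unsqueeze(0))

    def forward(self, x):
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return base + update


if __name__ == "__main__":
    layer = LoRASPLinear(64, 64, rank=8)
    out = layer(torch.randn(4, 64))
    out.sum().backward()
    # Only the randomly selected rank components of B receive non-zero gradients.
    print(layer.lora_B.grad.abs().sum(dim=0))
```

Under this reading, the frozen components incur no optimizer state, which is one plausible way the abstract's claimed memory savings could be realized; the paper itself should be consulted for the exact selection and freezing scheme.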