SIBO: A Simple Booster for Parameter-Efficient Fine-Tuning
Zhihao Wen, Jie Zhang, Yuan Fang
–arXiv.org Artificial Intelligence
Fine-tuning all parameters of large language models (LLMs) demands substantial compute and time. Recent advances in parameter-efficient fine-tuning (PEFT), such as Adapter tuning and LoRA, allow only a small fraction of an LLM's parameters to be adjusted. Concurrently, over-smoothing has been observed to diminish the effectiveness of these Transformer-based LLMs, leading to suboptimal performance on downstream tasks. In this paper, we present SIBO, a SImple BOoster that enhances PEFT by injecting an initial residual. SIBO is straightforward and readily extensible to a range of state-of-the-art PEFT techniques, alleviating over-smoothing and improving performance. Extensive experiments on 22 benchmark datasets demonstrate that SIBO significantly enhances the performance of various strong baselines, achieving up to 15.7% and 23.5% improvement over existing PEFT methods on arithmetic and commonsense reasoning tasks, respectively.
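The abstract only states that SIBO injects an initial residual into a PEFT module; the sketch below is one plausible reading of that idea applied to LoRA, not the paper's exact formulation. The blending coefficient `lam`, the convex combination of the current and initial hidden states, and the class name `LoRAWithInitialResidual` are assumptions introduced for illustration.

```python
# A minimal PyTorch sketch, assuming the initial residual is blended into the
# input of a LoRA module before the low-rank update. The mixing weight `lam`
# and the injection point are illustrative assumptions, not the paper's spec.
import torch
import torch.nn as nn


class LoRAWithInitialResidual(nn.Module):
    """LoRA update whose input is blended with the initial (layer-0) state x0."""

    def __init__(self, d_model: int, rank: int = 8, lam: float = 0.2):
        super().__init__()
        self.lora_a = nn.Linear(d_model, rank, bias=False)   # down-projection
        self.lora_b = nn.Linear(rank, d_model, bias=False)   # up-projection
        nn.init.zeros_(self.lora_b.weight)                   # start as a no-op, as in standard LoRA
        self.lam = lam                                       # assumed mixing weight

    def forward(self, x: torch.Tensor, x0: torch.Tensor) -> torch.Tensor:
        # Inject the initial residual: blend the current hidden state with the
        # initial hidden state, then apply the low-rank update on the blend.
        mixed = (1.0 - self.lam) * x + self.lam * x0
        return x + self.lora_b(self.lora_a(mixed))


# Usage: x is a hidden state at some Transformer layer, x0 the initial embedding.
x0 = torch.randn(2, 16, 768)   # (batch, seq_len, d_model)
x = torch.randn(2, 16, 768)
booster = LoRAWithInitialResidual(d_model=768)
print(booster(x, x0).shape)    # torch.Size([2, 16, 768])
```

Zero-initializing the up-projection keeps the module an identity map at the start of training, so injecting the initial residual changes only how the update is computed, not the pretrained model's initial behavior.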
Jun 1, 2024