Chain of LoRA: Efficient Fine-tuning of Language Models via Residual Learning
Wenhan Xia, Chengwei Qin, Elad Hazan
arXiv.org Artificial Intelligence
Fine-tuning is the primary methodology for tailoring pre-trained large language models to specific tasks. As models scale and the diversity of tasks expands, parameter-efficient fine-tuning methods are of paramount importance. One of the most widely used families of methods is low-rank adaptation (LoRA) and its variants. LoRA encodes the weight update as the product of two low-rank matrices. Despite its advantages, LoRA falls short of full-parameter fine-tuning in terms of generalization error on certain tasks. We introduce Chain of LoRA (COLA), an iterative optimization framework inspired by the Frank-Wolfe algorithm, to bridge the gap between LoRA and full-parameter fine-tuning without incurring additional computational costs or memory overhead. COLA employs a residual learning procedure: it merges learned LoRA modules into the pre-trained language model parameters and re-initializes optimization for newly added LoRA modules. We provide theoretical convergence guarantees as well as empirical results to validate the effectiveness of our algorithm. Across various models (OPT and Llama-2) and seven benchmark tasks, we demonstrate that COLA consistently outperforms LoRA without additional computational or memory costs.
Jan-8-2024
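The residual-learning loop the abstract describes can be illustrated with a short PyTorch sketch. Everything below is an illustrative assumption rather than the authors' implementation: the LoRALinear class, the merge_and_reinit method, the shapes, step counts, and learning rate are hypothetical, and the LoRA scaling factor is omitted for brevity.

```python
# A minimal sketch of the COLA residual-learning loop: train a LoRA module,
# fold it into the frozen base weights, re-initialize a fresh module, repeat.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank residual B @ A."""
    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in), requires_grad=False)
        self.A = nn.Parameter(torch.zeros(rank, d_in))
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        nn.init.kaiming_uniform_(self.A)  # B starts at zero, so the initial update is zero

    def forward(self, x):
        return x @ (self.weight + self.B @ self.A).T

    def merge_and_reinit(self):
        # Merge the learned low-rank update into the base parameters,
        # then restart optimization from a freshly initialized LoRA module.
        with torch.no_grad():
            self.weight += self.B @ self.A
            nn.init.kaiming_uniform_(self.A)
            self.B.zero_()

layer = LoRALinear(16, 16)
x, y = torch.randn(32, 16), torch.randn(32, 16)  # toy regression data
for chain_step in range(3):              # each pass is one link of the chain
    opt = torch.optim.SGD([layer.A, layer.B], lr=1e-2)
    for _ in range(100):                 # inner LoRA fine-tuning phase
        loss = ((layer(x) - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    layer.merge_and_reinit()             # residual learning step
```

Each outer iteration corresponds to one LoRA module in the chain: the learned low-rank residual is absorbed into the base weights and optimization restarts from a new module, so the number of trainable parameters per step stays constant, consistent with the abstract's claim of no additional computational or memory cost.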