Delta-LoRA: Fine-Tuning High-Rank Parameters with the Delta of Low-Rank Matrices
Bojia Zi, Xianbiao Qi, Lingzhi Wang, Jianan Wang, Kam-Fai Wong, Lei Zhang
arXiv.org Artificial Intelligence
In this paper, we present Delta-LoRA, a novel parameter-efficient approach to fine-tuning large language models (LLMs). In addition to updating the two low-rank matrices A and B, Delta-LoRA propagates learning to the pre-trained weight matrix W, updating it with the delta of the product A B between consecutive training steps. This strategy effectively addresses the limitation that the incremental update of the low-rank matrices alone is inadequate for learning representations suited to downstream tasks. Moreover, since the update of W requires neither computing its gradients nor storing their optimizer momentums, Delta-LoRA has memory requirements and computational costs comparable to LoRA. Extensive experiments show that Delta-LoRA significantly outperforms existing low-rank adaptation methods. We further support these results with comprehensive analyses that underscore its effectiveness.

Large Language Models (LLMs) have recently attracted considerable attention due to their remarkable performance across a broad spectrum of downstream tasks. Diverging from conventional Transformers at the scale of millions of parameters, modern LLMs typically scale up to billions of parameters, endowing them with notable advantages such as emergent capabilities and robust generalization (Bubeck et al., 2023). However, fine-tuning an LLM with all its learnable parameters (full fine-tuning) requires multiple GPUs with high memory capacity (Dettmers et al., 2023; Hu et al., 2022), which is unattainable for many companies and research institutions.
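To make the mechanism concrete, below is a minimal PyTorch sketch of a single Delta-LoRA update step for one linear layer. The layer sizes, the hyper-parameters `lam` and `scale`, the synthetic data, and the `forward` helper are illustrative assumptions, not the authors' implementation; only the update rule itself follows the description above.

```python
import torch

d_in, d_out, rank = 768, 768, 8
lam, scale = 0.5, 2.0  # hypothetical update ratio lambda and LoRA scaling alpha/r

# Pre-trained weight: frozen, with no gradients or optimizer state of its own.
W = torch.randn(d_out, d_in)
A = torch.nn.Parameter(torch.randn(d_out, rank) * 0.01)  # low-rank factor A
B = torch.nn.Parameter(torch.zeros(rank, d_in))          # low-rank factor B
opt = torch.optim.AdamW([A, B], lr=1e-4)

def forward(x):
    # LoRA-style forward pass: frozen W plus the scaled low-rank path A @ B.
    return x @ (W + scale * (A @ B)).T

x, target = torch.randn(4, d_in), torch.randn(4, d_out)

AB_old = (A @ B).detach()  # low-rank product before the optimizer step
loss = torch.nn.functional.mse_loss(forward(x), target)
loss.backward()            # gradients reach only A and B, never W
opt.step()
opt.zero_grad()

with torch.no_grad():
    # Delta-LoRA step: reuse the change in A @ B to update W itself,
    # so W needs neither its own gradients nor momentum buffers.
    W += lam * scale * ((A @ B) - AB_old)
```

The key point is the final in-place update: the delta of the low-rank product, already available for free after the optimizer step, is reused to move W, which is why the method's memory footprint stays close to LoRA's.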
Sep-5-2023