BoRA: Bi-dimensional Weight-Decomposed Low-Rank Adaptation
Qiushi Wang, Yuchen Fan, Junwei Bao, Hongfei Jiang, Yang Song
arXiv.org Artificial Intelligence
In recent years, Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation (LoRA) have significantly enhanced the adaptability of large-scale pre-trained models. Weight-Decomposed Low-Rank Adaptation (DoRA) improves upon LoRA by separating the magnitude and direction components of the weight matrix, leading to superior performance. However, DoRA's improvements are limited to the vertical dimension, resulting in an asymmetrical pattern between the horizontal and vertical dimensions. This paper introduces BoRA, an extension of LoRA and DoRA characterized by symmetrical properties across the horizontal and vertical dimensions. Our approach optimizes the weight matrix symmetrically by adjusting both column-wise and row-wise magnitudes.
Figure 1: Structure of BoRA (blue indicates frozen parameters; green indicates trainable parameters).
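For context, DoRA reparameterizes a frozen weight W0 as W' = m · (W0 + BA) / ||W0 + BA||_c, where B and A are the trainable low-rank factors and m is a trainable column-wise magnitude vector; the abstract suggests BoRA adds a row-wise counterpart so the rescaling is symmetric across both dimensions. The sketch below is a minimal PyTorch illustration of that idea under our own assumptions: the class and parameter names (BiDimDecomposedLinear, m_col, m_row) and the exact order of the two normalizations are our guesses, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class BiDimDecomposedLinear(nn.Module):
    """Sketch of a LoRA layer with both column-wise and row-wise magnitudes.

    An illustrative guess at the idea described in the abstract, not the
    authors' reference implementation: DoRA's column-wise magnitude is
    mirrored by a row-wise one so the decomposition is symmetric.
    """

    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        # Frozen pre-trained weight (blue in Figure 1).
        weight = torch.randn(out_features, in_features) * 0.02
        self.register_buffer("weight", weight)
        # Trainable low-rank factors (green in Figure 1), as in LoRA;
        # B starts at zero so the initial update is the identity mapping.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        # Trainable magnitudes, initialized from the frozen weight's norms.
        self.m_col = nn.Parameter(weight.norm(dim=0, keepdim=True))  # (1, in)
        self.m_row = nn.Parameter(weight.norm(dim=1, keepdim=True))  # (out, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Directional component: frozen weight plus low-rank update.
        v = self.weight + self.lora_B @ self.lora_A
        # Rescale column-wise (as in DoRA), then row-wise; the paper's
        # exact composition of the two rescalings may differ.
        v = v / v.norm(dim=0, keepdim=True) * self.m_col
        v = v / v.norm(dim=1, keepdim=True) * self.m_row
        return x @ v.t()


# Tiny usage example.
layer = BiDimDecomposedLinear(16, 32, rank=4)
y = layer(torch.randn(2, 16))
print(y.shape)  # torch.Size([2, 32])
```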
Dec-9-2024