Beyond Higher Rank: Token-wise Input-Output Projections for Efficient Low-Rank Adaptation
Shiwei Li, Xiandi Luo, Haozhao Wang, Xing Tang, Ziqiang Cui, Dugang Liu, Yuhua Li, Xiuqiang He, Ruixuan Li
Low-rank adaptation (LoRA) is a parameter-efficient fine-tuning (PEFT) method widely used in large language models (LLMs). LoRA essentially describes the projection of an input space into a low-dimensional output space, with the dimensionality determined by the LoRA rank. In standard LoRA, all input tokens share the same weights and undergo an identical input-output projection. This limits LoRA's ability to capture token-specific information, due to the inherent semantic differences among tokens. To address this limitation, we propose Token-wise Projected Low-Rank Adaptation (TopLoRA), which dynamically adjusts LoRA weights according to the input token, thereby learning token-wise input-output projections in an end-to-end manner. Formally, the weights of TopLoRA can be expressed as $B\Sigma_X A$, where $A$ and $B$ are low-rank matrices (as in standard LoRA), and $\Sigma_X$ is a diagonal matrix generated from each input token $X$. Notably, TopLoRA does not increase the rank of the LoRA weights but achieves more granular adaptation by learning token-wise LoRA weights (i.e., token-wise input-output projections). Extensive experiments across multiple models and datasets demonstrate that TopLoRA consistently outperforms LoRA and its variants. The code is available at https://github.com/Leopold1423/toplora-neurips25.
arXiv.org Artificial Intelligence
Oct-28-2025
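
To make the $B\Sigma_X A$ formulation concrete, below is a minimal PyTorch sketch of a TopLoRA-style linear layer. The generator `sigma_gen` that maps each token to the $r$ diagonal entries of $\Sigma_X$ is a hypothetical parameterization assumed here for illustration; the paper's actual construction of $\Sigma_X$ may differ (see the linked repository).

```python
import torch
import torch.nn as nn


class TopLoRALinear(nn.Module):
    """Sketch of a TopLoRA-style layer: y = W x + s * B diag(sigma(x)) A x.

    A and B are shared low-rank matrices as in standard LoRA; sigma(x)
    supplies token-wise diagonal scaling, so the rank of the update stays r
    while the effective LoRA weights vary per token.
    """

    def __init__(self, in_features: int, out_features: int, rank: int = 8,
                 alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)        # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        # Hypothetical diagonal generator: one scale per low-rank direction.
        self.sigma_gen = nn.Linear(in_features, rank, bias=False)
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, in_features)
        sigma = self.sigma_gen(x)          # (batch, seq_len, rank): diag of Sigma_X
        low = x @ self.A.T                 # project each token into rank-r space
        low = low * sigma                  # apply token-wise diagonal Sigma_X
        delta = low @ self.B.T             # project back to the output space
        return self.base(x) + self.scaling * delta


if __name__ == "__main__":
    layer = TopLoRALinear(64, 64, rank=4)
    out = layer(torch.randn(2, 10, 64))    # batch of 2 sequences, 10 tokens each
    print(out.shape)                       # torch.Size([2, 10, 64])
```

Because $\Sigma_X$ is diagonal, the per-token adjustment adds only $r$ scaling factors per token rather than a full weight matrix, which is consistent with the abstract's claim that TopLoRA refines the adaptation without increasing the LoRA rank.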