CTR-LoRA: Curvature-Aware and Trust-Region Guided Low-Rank Adaptation for Large Language Models
Zhuxuanzi Wang, Mingqiao Mo, Xi Xiao, Chen Liu, Chenrui Ma, Yunbei Zhang, Xiao Wang, Smita Krishnaswamy, Tianyang Wang
arXiv.org Artificial Intelligence
Parameter-efficient fine-tuning (PEFT) has become the standard approach for adapting large language models under limited compute and memory budgets. Although prior methods improve efficiency through low-rank updates, quantization, or heuristic budget reallocation, they often decouple capacity allocation from how updates evolve during training. In this work, we introduce CTR-LoRA, a curvature-aware, trust-region-guided framework that integrates rank scheduling with stability-aware optimization. CTR-LoRA allocates parameters according to marginal utility estimated from lightweight second-order proxies and constrains updates with a Fisher/Hessian-metric trust region. Experiments on multiple open-source backbones (7B-13B), evaluated on both in-distribution and out-of-distribution benchmarks, show consistent improvements over strong PEFT baselines. Beyond higher accuracy, CTR-LoRA improves training stability, reduces memory requirements, and increases throughput, placing it on the Pareto frontier of performance and efficiency. These results point to a principled path toward more robust and deployable PEFT.
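The paper's implementation is not reproduced here, but the two mechanisms the abstract names, marginal-utility-based capacity allocation from a lightweight second-order proxy and a Fisher-metric trust region on updates, can be illustrated with a minimal sketch. The sketch below assumes a diagonal Fisher estimate (an EMA of squared gradients) as the second-order proxy; the names `diag_fisher_ema`, `marginal_utility`, `trust_region_project`, and all hyperparameters are illustrative assumptions, not the authors' API.

```python
import torch

def diag_fisher_ema(fisher, grad, beta=0.95):
    # Running diagonal Fisher proxy: an EMA of squared gradients.
    return beta * fisher + (1.0 - beta) * grad.pow(2)

def marginal_utility(grad, fisher, eps=1e-8):
    # Newton-style score g^2 / (F + eps): a proxy for the loss reduction
    # extra capacity could buy; a rank scheduler could allocate larger
    # LoRA ranks to modules with higher scores.
    return (grad.pow(2) / (fisher + eps)).sum().item()

def trust_region_project(delta, fisher, radius):
    # Shrink a proposed update so its Fisher-metric norm
    # sqrt(delta^T F delta) stays within the trust radius.
    norm = torch.sqrt((fisher * delta.pow(2)).sum() + 1e-12)
    return min(1.0, radius / norm.item()) * delta

# Toy usage on one LoRA factor pair (update = B @ A over a frozen weight).
torch.manual_seed(0)
rank, d_in, d_out = 8, 64, 64
A = torch.nn.Parameter(0.01 * torch.randn(rank, d_in))
B = torch.nn.Parameter(0.01 * torch.randn(d_out, rank))  # nonzero init so toy grads are nonzero
fisher_A, fisher_B = torch.zeros_like(A), torch.zeros_like(B)

x = torch.randn(16, d_in)
loss = (x @ (B @ A).T).pow(2).mean()  # stand-in task loss
loss.backward()

with torch.no_grad():
    fisher_A = diag_fisher_ema(fisher_A, A.grad)
    fisher_B = diag_fisher_ema(fisher_B, B.grad)
    score = marginal_utility(A.grad, fisher_A) + marginal_utility(B.grad, fisher_B)
    lr, radius = 1e-2, 0.1
    # Per-factor trust-region-projected SGD step (a joint projection
    # across all adapted modules is an obvious alternative).
    A -= trust_region_project(lr * A.grad, fisher_A, radius)
    B -= trust_region_project(lr * B.grad, fisher_B, radius)
print(f"marginal-utility score: {score:.4f}")
```

In a full scheduler, the marginal-utility scores would be compared across modules to grow or shrink each adapter's rank over training, while the trust-region projection bounds how far the effective weights move per step in the Fisher metric, which is where the claimed stability gains would come from.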
Oct-21-2025