LoRA-GA: Low-Rank Adaptation with Gradient Approximation
Neural Information Processing Systems (NeurIPS)
Fine-tuning large-scale pretrained models is prohibitively expensive in terms of computational and memory costs. LoRA, as one of the most popular Parameter-Efficient Fine-Tuning (PEFT) methods, offers a cost-effective alternative by fine-tuning an auxiliary low-rank model that has significantly fewer parameters.
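To make the low-rank idea concrete, here is a minimal sketch of a LoRA-style adapter around a frozen linear layer: the pretrained weight stays fixed and only the small rank-r factors are trained. The class name `LoRALinear`, the `rank`/`alpha` values, and the zero initialization of `B` are illustrative assumptions for plain LoRA, not the paper's LoRA-GA initialization.

```python
# Minimal LoRA-style adapter sketch (assumptions: names, rank/alpha, init scheme).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)          # freeze pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        d_out, d_in = base.weight.shape
        # Only these low-rank factors (r*d_in + d_out*r parameters) are trained.
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no change at step 0
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the scaled low-rank correction B @ A applied to x.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    y = layer(torch.randn(4, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(y.shape, f"trainable params: {trainable} of {total}")
```

Because the frozen weight is untouched, the trainable parameter count drops from d_in*d_out to r*(d_in + d_out), which is the cost saving the abstract refers to.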