LoRMA: Low-Rank Multiplicative Adaptation for LLMs
Harsh Bihany, Shubham Patel, Ashutosh Modi
Large Language Models have shown remarkable capabilities in the NLP domain. Their effectiveness can largely be attributed to their ability to adapt to an array of downstream tasks. However, full fine-tuning is generally computationally expensive. To mitigate this, many techniques that prioritize efficiency have been developed, a prominent one being Low-Rank Adaptation (LoRA). However, LoRA and its variants employ re-parametrized additive updates. In this paper, we propose Low-Rank Multiplicative Adaptation (LoRMA), which shifts the paradigm from additive updates to the richer space of matrix multiplicative transformations. We tackle challenges such as the computational complexity and rank bottleneck of matrix multiplication by re-ordering operations and introducing rank inflation strategies. We conduct extensive experiments to demonstrate the effectiveness of our approach across various evaluation metrics.
arXiv.org Artificial Intelligence
Jun-10-2025
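To make the contrast with additive LoRA concrete, below is a minimal PyTorch sketch of one plausible instance of a multiplicative low-rank update, assuming the form W' = (I + BA)W. The class name LoRMALinear, the initialization scheme, and the identity-based rank inflation are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class LoRMALinear(nn.Module):
    """Sketch of a multiplicative low-rank adapter (assumed form).

    LoRA adds a low-rank correction:        W' = W + B @ A
    A multiplicative update instead maps:    W' = (I + B @ A) @ W

    The identity term is one possible rank inflation strategy:
    B @ A alone has rank <= r, so multiplying W by it directly
    would collapse the effective rank of the adapted weight.
    """

    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weight

        d_out = base.out_features
        # A is zero-initialized so (I + B @ A) starts as the identity,
        # i.e. the adapted layer initially matches the frozen one.
        self.A = nn.Parameter(torch.zeros(r, d_out))
        self.B = nn.Parameter(torch.randn(d_out, r) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-ordered computation: never materialize (I + B @ A) @ W.
        # Applying B (A h) to the base output h costs O(d * r) extra,
        # versus O(d^2 * r) for forming the full product of matrices.
        h = self.base(x)  # W x (bias, if any, is transformed too here)
        return h + (h @ self.A.t()) @ self.B.t()


# Usage: wrap an existing projection and train only A and B.
layer = LoRMALinear(nn.Linear(768, 768), r=8)
y = layer(torch.randn(4, 768))
```

The re-ordering in `forward` reflects the operation re-ordering the abstract mentions: the low-rank factors act on the activation vector rather than on the weight matrix, keeping the per-token overhead linear in the rank r.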