ReFT: Representation Finetuning for Language Models
Zhengxuan Wu, Zheng Wang, Atticus Geiger
Neural Information Processing Systems
Parameter-efficient finetuning (PEFT) methods seek to adapt large neural models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. We pursue this hypothesis by developing a family of Representation Finetuning (ReFT) methods. ReFT methods operate on a frozen base model and learn task-specific interventions on hidden representations. We define a strong instance of the ReFT family, Low-rank Linear Subspace ReFT (LoReFT), and we identify an ablation of this method that trades some performance for increased efficiency. Both are drop-in replacements for existing PEFTs and learn interventions that are 15x-65x more parameter-efficient than LoRA.
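To make the idea of a representation-level intervention concrete, below is a minimal PyTorch sketch of a LoReFT-style edit that modifies a hidden state only within a low-rank linear subspace while the base model stays frozen. The class name, shapes, and the use of an orthogonal parametrization for the projection R are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import orthogonal


class LowRankSubspaceIntervention(nn.Module):
    """Illustrative LoReFT-style edit: h -> h + R^T (W h + b - R h).

    Only R, W, and b are trained; the frozen base model produces h.
    """

    def __init__(self, hidden_dim: int, rank: int):
        super().__init__()
        # R: rank x hidden_dim projection into the low-rank subspace,
        # kept row-orthonormal via PyTorch's orthogonal parametrization.
        self.R = orthogonal(nn.Linear(hidden_dim, rank, bias=False))
        # W, b: learned linear map giving the target values in that subspace.
        self.W = nn.Linear(hidden_dim, rank)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (..., hidden_dim) hidden representation from the frozen model.
        delta = self.W(h) - self.R(h)      # edit expressed inside the subspace
        return h + delta @ self.R.weight   # map the edit back to hidden space


# Example: intervene on a batch of hidden states from a frozen model.
hidden = torch.randn(2, 768)
intervention = LowRankSubspaceIntervention(hidden_dim=768, rank=4)
edited = intervention(hidden)
```

In this sketch the trainable parameter count is roughly 2 * rank * hidden_dim per intervention and is independent of the frozen model's weight matrices, which is the kind of budget that underlies the parameter savings the abstract reports relative to weight-based PEFTs such as LoRA.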