Low-Rank Adaptation of Neural Fields
Anh Truong, Ahmed H. Mahmoud, Mina Konaković Luković, Justin Solomon
arXiv.org Artificial Intelligence
Processing visual data often involves small adjustments or sequences of changes, e.g., image filtering, surface smoothing, and animation. While established graphics techniques like normal mapping and video compression exploit redundancy to encode such small changes efficiently, the problem of encoding small changes to neural fields -- neural network parameterizations of visual or physical functions -- has received less attention. We propose a parameter-efficient strategy for updating neural fields using low-rank adaptations (LoRA). LoRA, a method from the parameter-efficient fine-tuning community for large language models, encodes small updates to pre-trained models with minimal computational overhead. We adapt LoRA for instance-specific neural fields, avoiding the need for large pre-trained models and yielding lightweight updates. We validate our approach with experiments in image filtering, geometry editing, video compression, and energy-based editing, demonstrating its effectiveness and versatility for representing neural field updates.
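To make the core mechanism concrete, the following is a minimal numpy sketch of a LoRA-style update applied to a single dense layer of a coordinate network. The shapes, rank, and function names are illustrative assumptions, not the paper's implementation; the point is only that an edit of rank r costs 2·r·d parameters instead of d² for a full-weight update.

```python
import numpy as np

# Illustrative sketch (not the paper's code): one frozen dense layer of a
# neural field, edited via a low-rank LoRA-style correction delta_W = B @ A.

rng = np.random.default_rng(0)
d_out, d_in, rank = 256, 256, 4

W = rng.standard_normal((d_out, d_in))          # frozen base weights

# Trainable low-rank factors; in LoRA, B is often initialized to zero so
# the edit starts as the identity update. Small values used here instead.
B = rng.standard_normal((d_out, rank)) * 0.01
A = rng.standard_normal((rank, d_in)) * 0.01

def forward(x):
    # Base layer plus low-rank correction, computed without ever
    # materializing the full d_out x d_in matrix delta_W.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
y = forward(x)

# Parameter cost of storing the edit: 2 * rank * d vs. d * d.
lora_params = B.size + A.size   # 2048 for d = 256, rank = 4
full_params = W.size            # 65536
print(lora_params, full_params)
```

The output of `forward` matches `(W + B @ A) @ x`, so the low-rank edit can be merged into the base weights at no inference cost once training is done.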
Oct-20-2025