Low-Rank Adaptation of Neural Fields
Anh Truong, Ahmed H. Mahmoud, Mina Konaković Luković, Justin Solomon
arXiv.org Artificial Intelligence
Processing visual data often involves small adjustments or sequences of changes, e.g., image filtering, surface smoothing, and animation. While established graphics techniques like normal mapping and video compression exploit redundancy to encode such small changes efficiently, the problem of encoding small changes to neural fields -- neural network parameterizations of visual or physical functions -- has received less attention. We propose a parameter-efficient strategy for updating neural fields using low-rank adaptations (LoRA). LoRA, a parameter-efficient fine-tuning method from the LLM community, encodes small updates to pre-trained models with minimal computational overhead. We adapt LoRA for instance-specific neural fields, avoiding the need for large pre-trained models and yielding lightweight updates. We validate our approach with experiments in image filtering, geometry editing, video compression, and energy-based editing, demonstrating its effectiveness and versatility for representing neural field updates.
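The core idea behind LoRA can be sketched in a few lines: a frozen weight matrix W is edited by adding a product of two small trainable factors, B @ A, whose rank r is far below the layer's dimensions. The sketch below is illustrative only (it is not the paper's implementation); the layer sizes, rank, and initialization scale are assumptions chosen to show the parameter savings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of one MLP layer in a neural field (sizes are assumed
# for illustration; actual architectures vary).
d_out, d_in = 256, 256
W = rng.standard_normal((d_out, d_in))

# Low-rank update: only A and B are trained to encode the edit.
r = 4  # rank, with r << min(d_out, d_in)
B = rng.standard_normal((d_out, r)) * 0.01
A = rng.standard_normal((r, d_in)) * 0.01

def adapted_forward(x):
    # Effective weight is W + B @ A, applied without materializing the sum.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
y = adapted_forward(x)

full_params = W.size            # 65,536 parameters for a dense update
lora_params = A.size + B.size   # 2,048 parameters for the low-rank update
print(lora_params / full_params)  # 0.03125, i.e., a 32x smaller update
```

Storing only A and B per edit is what makes a sequence of changes (e.g., animation frames or filter strengths) cheap to encode relative to re-saving the full network.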
Oct-20-2025