Memory Constrained Dynamic Subnetwork Update for Transfer Learning
Aël Quélennec, Pavlo Mozharovskyi, Van-Tam Nguyen, Enzo Tartaglione
On-device neural network training faces critical memory constraints that limit the adaptation of pre-trained models to downstream tasks. We present MeDyate, a theoretically grounded framework for memory-constrained dynamic subnetwork adaptation. Our approach introduces two key innovations: LaRa (Layer Ranking), an improved layer-importance metric that enables principled layer pre-selection, and a dynamic channel sampling strategy that exploits the temporal stability of channel importance distributions during fine-tuning. MeDyate dynamically resamples channels between epochs according to importance-weighted probabilities, ensuring comprehensive exploration of the parameter space while respecting strict memory budgets. Extensive evaluation across a large panel of tasks and architectures demonstrates that MeDyate achieves state-of-the-art performance under extreme memory constraints, consistently outperforming existing static and dynamic approaches while remaining computationally efficient. Our method represents a significant step towards efficient on-device learning, demonstrating effective fine-tuning with memory budgets as low as a few hundred kB of RAM.
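The resampling step described above is easy to sketch. Below is a minimal, hypothetical illustration of importance-weighted channel selection under a fixed memory budget; the function name `resample_channels`, the uniform per-channel cost model, and the random placeholder scores are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def resample_channels(importance, memory_budget, channel_cost, rng):
    """Draw the channel subset to fine-tune during the next epoch.

    importance    : 1-D array of non-negative per-channel importance scores
    memory_budget : bytes available for trainable state
    channel_cost  : bytes of gradient/optimizer state per channel
                    (assumed uniform across channels for this sketch)
    """
    probs = importance / importance.sum()
    # Largest subnetwork that fits within the budget.
    k = min(len(importance), int(memory_budget // channel_cost))
    # Importance-weighted sampling without replacement: high-importance
    # channels are favoured each epoch, yet every channel keeps a nonzero
    # selection probability, so the full parameter space is explored over
    # the course of fine-tuning.
    return rng.choice(len(importance), size=k, replace=False, p=probs)

# Toy usage: 64 channels, a 200 kB budget, 8 kB of state per channel.
rng = np.random.default_rng(0)
importance = rng.random(64)  # placeholder; MeDyate would supply real scores
active = resample_channels(importance, 200_000, 8_000, rng)
print(sorted(active))        # channel indices unfrozen for this epoch
```

Between epochs, one would reuse or refresh the importance scores (the paper argues their distribution is temporally stable during fine-tuning) and call the sampler again, so the trainable subnetwork changes while the memory footprint stays fixed.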
arXiv.org Artificial Intelligence
Oct-27-2025