Sparsity Outperforms Low-Rank Projections in Few-Shot Adaptation
Mrabah, Nairouz, Richet, Nicolas, Ayed, Ismail Ben, Granger, Éric
arXiv.org Artificial Intelligence
Adapting Vision-Language Models (VLMs) to new domains with few labeled samples remains a significant challenge due to severe overfitting and computational constraints. State-of-the-art solutions, such as low-rank reparameterization, mitigate these issues but often struggle with generalization and require extensive hyperparameter tuning. In this paper, we propose a novel Sparse Optimization (SO) framework. Unlike low-rank approaches that typically constrain updates to a fixed subspace, SO leverages high sparsity to dynamically adjust very few parameters. We introduce two key paradigms. First, we advocate for local sparsity and global density, which updates a minimal subset of parameters per iteration while maintaining overall model expressiveness. Second, we advocate for local randomness and global importance, which sparsifies the gradient using random selection while pruning the first moment based on importance. This combination significantly mitigates overfitting and ensures stable adaptation in low-data regimes. Extensive experiments on 11 diverse datasets show that SO achieves state-of-the-art few-shot adaptation performance while reducing memory overhead.
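The abstract describes the method only at a high level, so the following is a minimal PyTorch sketch of how the two paradigms might compose inside one optimizer step: a random mask sparsifies the gradient (local randomness), the first moment is pruned by magnitude (global importance), and each step therefore touches only a small fraction of the weights (local sparsity). The class name SparseOpt, the density and beta hyperparameters, and the magnitude-based pruning rule are illustrative assumptions, not the paper's actual implementation.

```python
import torch

class SparseOpt(torch.optim.Optimizer):
    """Minimal sketch of a sparse optimizer in the spirit of the abstract.
    All specifics (density, beta, magnitude pruning) are assumptions."""

    def __init__(self, params, lr=1e-3, density=0.01, beta=0.9):
        defaults = dict(lr=lr, density=density, beta=beta)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            lr, density, beta = group["lr"], group["density"], group["beta"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if "m" not in state:
                    state["m"] = torch.zeros_like(p)
                # Local randomness: keep only a random fraction of
                # gradient entries at each iteration.
                mask = torch.rand_like(p) < density
                g = p.grad * mask
                # First-moment (momentum) update on the sparse gradient.
                m = state["m"].mul_(beta).add_(g, alpha=1 - beta)
                # Global importance: prune the first moment, retaining
                # only the largest-magnitude entries (top `density` share).
                k = max(1, int(density * m.numel()))
                thresh = m.abs().flatten().kthvalue(m.numel() - k + 1).values
                m.mul_(m.abs() >= thresh)
                # Sparse parameter update: most coordinates stay untouched.
                p.add_(m, alpha=-lr)
```

Because the gradient mask is redrawn every step, different coordinates are eligible for updates over time, so the full parameter set remains reachable (global density) even though any single step is extremely sparse. Usage would mirror any PyTorch optimizer, e.g. `opt = SparseOpt(model.parameters(), lr=1e-3)` followed by `opt.step()` after backpropagation.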
Aug-12-2025