MoTiC: Momentum Tightness and Contrast for Few-Shot Class-Incremental Learning
Zeyu He, Shuai Huang, Yuwu Lu, Ming Zhao
arXiv.org Artificial Intelligence
Few-Shot Class-Incremental Learning (FSCIL) must contend with the dual challenge of learning new classes from scarce samples while preserving knowledge of old classes. Existing methods use a frozen feature extractor and class-averaged prototypes to mitigate catastrophic forgetting and overfitting. However, new-class prototypes suffer significant estimation bias due to extreme data scarcity, whereas base-class prototypes benefit from sufficient data. In this work, we demonstrate theoretically, via Bayesian analysis, that aligning new-class priors with old-class statistics reduces variance and improves prototype accuracy. Furthermore, we propose large-scale contrastive learning to enforce cross-category feature tightness. To further enrich feature diversity and inject prior information for new-class prototypes, we integrate momentum self-supervision and virtual categories into the Momentum Tightness and Contrast framework (MoTiC), constructing a feature space with rich representations and enhanced inter-class cohesion. Experiments on three FSCIL benchmarks achieve state-of-the-art performance, particularly on the fine-grained CUB-200 task, validating our method's ability to reduce estimation bias and improve incremental-learning robustness.
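The variance-reduction argument can be illustrated with a minimal sketch. The paper's exact Bayesian formulation is not given in the abstract, so the snippet below uses a simple, hypothetical shrinkage estimator: a new-class prototype estimated from only five shots is pulled toward the pooled base-class mean, trading a small bias for a large drop in variance. All names, dimensions, and the mixing weight `alpha` are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: base classes have abundant data, so their pooled
# mean is treated as known; the new class is observed through 5 shots.
d = 64                                  # feature dimension (illustrative)
base_mean = np.zeros(d)                 # pooled mean of base-class features
# New-class mean drawn near the base statistics (the Bayesian prior).
true_new_mean = base_mean + rng.normal(0.0, 0.3, d)
shots = true_new_mean + rng.normal(0.0, 1.0, (5, d))

# Class-averaged prototype: unbiased but high-variance with 5 samples.
naive_proto = shots.mean(axis=0)

# Shrinkage toward the base-class prior -- a simple instance of aligning
# new-class priors with old-class statistics. alpha is an assumed weight.
alpha = 0.5
calibrated_proto = alpha * naive_proto + (1.0 - alpha) * base_mean

naive_err = np.linalg.norm(naive_proto - true_new_mean)
calib_err = np.linalg.norm(calibrated_proto - true_new_mean)
print(f"naive error: {naive_err:.3f}, calibrated error: {calib_err:.3f}")
```

Under these assumptions the calibrated prototype lands closer to the true class mean than the raw five-shot average, matching the abstract's claim that injecting old-class statistics reduces estimation variance.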
Sep-25-2025