Incremental Few-Shot Learning
Incremental Few-Shot Learning with Attention Attractor Networks
Machine learning classifiers are often trained to recognize a set of pre-defined classes. However, in many applications, it is often desirable to have the flexibility of learning additional concepts, with limited data and without re-training on the full training set. This paper addresses this problem, incremental few-shot learning, where a regular classification network has already been trained to recognize a set of base classes, and several extra novel classes are being considered, each with only a few labeled examples. After learning the novel classes, the model is then evaluated on the overall classification performance on both base and novel classes. To this end, we propose a meta-learning model, the Attention Attractor Network, which regularizes the learning of novel classes. In each episode, we train a set of new weights to recognize novel classes until they converge, and we show that the technique of recurrent back-propagation can back-propagate through the optimization process and facilitate the learning of these parameters. We demonstrate that the learned attractor network can help recognize novel classes while remembering old classes without the need to review the original training set, outperforming various baselines.
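As a toy sketch of the regularization idea only (not the paper's learned, attention-based attractor or its recurrent back-propagation), the snippet below fits novel-class weights on a small support set by gradient descent on a few-shot logistic loss plus a fixed quadratic attractor term. The function names, the `lam` coefficient, and the zero attractor are all illustrative assumptions.

```python
import numpy as np

def few_shot_loss(w, X, y):
    # Binary logistic loss on the few-shot support set.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def learn_novel_weights(X, y, attractor, lam=0.1, lr=0.2, steps=300):
    # Gradient descent on: few_shot_loss(w) + lam * ||w - attractor||^2.
    # The quadratic term pulls the novel-class weights toward the attractor,
    # which here stands in for knowledge carried over from the base classes.
    w = attractor.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / len(y) + 2.0 * lam * (w - attractor)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))       # 10 support examples, 4 features
y = (X[:, 0] > 0).astype(float)    # toy binary labels for the novel class
attractor = np.zeros(4)            # placeholder attractor (all zeros)
w = learn_novel_weights(X, y, attractor)
print(few_shot_loss(w, X, y), few_shot_loss(attractor, X, y))
```

In the full method the attractor is produced by a meta-learned attention mechanism over base-class weights; here it is simply a constant vector to keep the sketch self-contained.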
Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima
This paper considers incremental few-shot learning, which requires a model to continually recognize new categories with only a few examples provided. Our study shows that existing methods severely suffer from catastrophic forgetting, a well-known problem in incremental learning, which is aggravated due to data scarcity and imbalance in the few-shot setting. Our analysis further suggests that to prevent catastrophic forgetting, actions need to be taken in the primitive stage -- the training of base classes instead of later few-shot learning sessions. Therefore, we propose to search for flat local minima of the base training objective function and then fine-tune the model parameters within the flat region on new tasks. In this way, the model can efficiently learn new classes while preserving the old ones. Comprehensive experimental results demonstrate that our approach outperforms all prior state-of-the-art methods and is very close to the approximate upper bound.
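As a toy illustration of why flat minima help (not the paper's actual search procedure), the sketch below defines a 1-D loss with a slightly deeper but sharp minimum and a wide flat one. Minimizing the loss averaged over small random parameter perturbations, one common proxy for flatness, selects the flat basin, within which later fine-tuning can move without a large increase in the base loss. All names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def base_loss(w):
    # Toy base-training loss: a sharp minimum at w=0 (slightly deeper)
    # and a wide, flat minimum at w=3.
    return -(1.05 * np.exp(-50.0 * w**2) + np.exp(-0.5 * (w - 3.0)**2))

def smoothed_loss(w, sigma=0.3, n=2000):
    # Average the loss over random perturbations of w: a simple flatness proxy.
    eps = rng.normal(0.0, sigma, size=n)
    return np.mean(base_loss(w + eps))

grid = np.linspace(-1.0, 4.0, 501)
w_plain = grid[np.argmin([base_loss(w) for w in grid])]     # lands in the sharp well
w_flat = grid[np.argmin([smoothed_loss(w) for w in grid])]  # lands in the flat well

# Fine-tuning on new tasks would then be constrained to stay near w_flat,
# e.g. by clipping each update to the interval [w_flat - r, w_flat + r].
print(w_plain, w_flat)
```

The plain minimizer picks the sharp well near 0 because it is marginally deeper, while the perturbation-averaged objective prefers the flat well near 3, where small parameter changes barely move the loss.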
Incremental Few-Shot Learning with Attention Attractor Networks
Mengye Ren, Renjie Liao, Ethan Fetaya, Richard Zemel
Reviews: Incremental Few-Shot Learning with Attention Attractor Networks
In terms of originality, I believe the proposed method to be sufficiently novel, and at no point did I feel this was merely an incremental improvement. In terms of significance, it should be said that I feel this idea is fairly specific to the incremental classification setting and would not be general enough to be directly applicable in another domain (e.g., …). However, I still believe this work should be accepted and would expect recognition within the domain. With regards to the clarity of the submission, I believe sections 3.1 and 3.2 could be improved. Detailed comments below: Introduction, L36: "We optimize a regularizer that reduces catastrophic forgetting" - Perhaps it would be a good idea to delineate this from the many other works on regularization-based methods for reducing catastrophic forgetting in which the regularizer isn't learnt?
Reviews: Incremental Few-Shot Learning with Attention Attractor Networks
The authors propose a new attention attractor network for incremental few-shot learning, where the base classifier is trained offline with ample data and additional novel classes are added later, each with only a few labeled examples. The setting is important and interesting. The idea is novel and the results are overall quite strong. There are some concerns regarding clarity; these should be addressed in the final version.