Incremental Learning with Unlabeled Data in the Wild
Kibok Lee, Kimin Lee, Jinwoo Shin, Honglak Lee
Deep neural networks are known to suffer from catastrophic forgetting in class-incremental learning, where performance on previous tasks degrades drastically when learning a new task. To alleviate this effect, we propose to leverage a continuous and large stream of unlabeled data in the wild. In particular, to exploit such transient external data effectively, we design a novel class-incremental learning scheme with (a) a new distillation loss, termed global distillation, (b) a learning strategy to avoid overfitting to the most recent task, and (c) a sampling strategy for the desired external data. Our experimental results on various datasets, including CIFAR and ImageNet, demonstrate the superiority of the proposed methods over prior methods, particularly when a stream of unlabeled data is accessible: we achieve up to a 9.3% relative performance improvement over the state-of-the-art method.
Mar-29-2019
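To illustrate the general idea of combining a classification loss with a distillation term over previously seen classes on both labeled and unlabeled data, here is a minimal PyTorch sketch. It is not the authors' implementation; names such as `new_model`, `old_model`, `num_old_classes`, the temperature `T`, and the weight `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, T=2.0):
    """KL-style distillation between softened teacher and student outputs."""
    teacher = F.softmax(old_logits / T, dim=1)
    student = F.log_softmax(new_logits / T, dim=1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student, teacher, reduction="batchmean") * (T * T)

def incremental_step_loss(new_model, old_model, x_labeled, y, x_unlabeled,
                          num_old_classes, T=2.0, lam=1.0):
    # Standard cross-entropy on the labeled data of the current task.
    logits = new_model(x_labeled)
    cls_loss = F.cross_entropy(logits, y)

    # Distill the previous (frozen) model's knowledge over the old classes,
    # using both the labeled batch and an unlabeled batch sampled from the
    # external data stream.
    x_all = torch.cat([x_labeled, x_unlabeled], dim=0)
    with torch.no_grad():
        old_logits = old_model(x_all)[:, :num_old_classes]
    new_old_logits = new_model(x_all)[:, :num_old_classes]
    dst_loss = distillation_loss(new_old_logits, old_logits, T)

    return cls_loss + lam * dst_loss
```

This sketch reflects a common distillation-based recipe for incremental learning; the paper's global distillation additionally distills over all previous classes jointly rather than per task, and pairs this with its confidence-based sampling of external data.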