Overleaf Example

Neural Information Processing Systems

Deep networks have shown remarkable results in the task of object detection. However, their performance drops critically when they are subsequently trained on novel classes without any samples from the base classes originally used to train the model. This phenomenon is known as catastrophic forgetting. Recently, several incremental learning methods have been proposed to mitigate catastrophic forgetting for object detection. Despite their effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes. This requirement is impractical in many real-world settings, since the base classes do not necessarily co-occur with the novel classes.



Mining

Neural Information Processing Systems

For class-incremental semantic segmentation, such a phenomenon often becomes much worse due to the background shift, i.e., some concepts learned at previous stages are assigned to the background class at the current training stage, thereby significantly reducing the performance on these old concepts. To address this issue, we propose a simple yet effective method in this paper, named Mining unseen Classes via Regional Objectness for Segmentation (MicroSeg).
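The background shift described above can be made concrete with a small sketch. The snippet below (a hypothetical illustration, not MicroSeg's actual pipeline; the `stage_labels` helper and the class IDs are invented for this example) shows how, when only the current stage's classes are annotated, pixels belonging to classes learned at earlier stages collapse into the background class:

```python
import numpy as np

BACKGROUND = 0

def stage_labels(full_mask: np.ndarray, current_classes: set) -> np.ndarray:
    """Keep only the current stage's classes; all other pixels become background.

    This mimics how incremental-segmentation ground truth is typically built:
    each stage annotates only its own classes, so old-class pixels are
    (incorrectly, from the model's point of view) relabeled as background.
    """
    return np.where(np.isin(full_mask, list(current_classes)),
                    full_mask, BACKGROUND)

# Full ground truth: class 1 was learned at stage 1, class 2 is added at stage 2.
full = np.array([[1, 1, 0],
                 [0, 2, 2]])

# Stage 2 annotates only class 2, so all class-1 pixels shift to background.
stage2 = stage_labels(full, {2})
print(stage2)  # class-1 pixels are now 0: this is the background shift
```

Training on `stage2` labels pushes the model to predict background where it previously predicted class 1, which is exactly why the shift accelerates forgetting of old concepts.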


A Surprisingly Simple Approach to Generalized Few-Shot Semantic Segmentation

Neural Information Processing Systems

In this paper, we propose a simple yet effective method for GFSS that does not use the techniques mentioned above. We also theoretically show that our method perfectly maintains the segmentation performance of the base-class model over most of the base classes. Through numerical experiments, we demonstrate the effectiveness of our method: it improved novel-class segmentation performance in the 1-shot scenario by 6.1% on the PASCAL-5i dataset and 4.7% on