Deep networks have shown remarkable results in the task of object detection. However, their performance drops sharply when they are subsequently trained on novel classes without any samples from the base classes originally used to train the model. This phenomenon is known as catastrophic forgetting. Recently, several incremental learning methods have been proposed to mitigate catastrophic forgetting for object detection. Despite their effectiveness, these methods require the unlabeled base classes to co-occur in the training data of the novel classes. This requirement is impractical in many real-world settings, since the base classes do not necessarily co-occur with the novel classes.
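The forgetting described above can be reproduced in miniature. The following is a toy sketch (not the detection setup from the paper): a single logistic neuron is first fit to a "base" task, then fine-tuned on a "novel" task with conflicting labels and no replayed base samples, after which base-task accuracy collapses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, b, X, y, lr=0.5, epochs=200):
    # Plain gradient descent on the logistic loss.
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean((sigmoid(X @ w + b) > 0.5) == y)

# "Base" task: positive inputs belong to class 1.
Xa, ya = np.array([[1.0], [-1.0]]), np.array([1.0, 0.0])
# "Novel" task: conflicting labeling, with no base samples replayed.
Xb, yb = np.array([[1.0], [-1.0]]), np.array([0.0, 1.0])

w, b = np.zeros(1), 0.0
w, b = train(w, b, Xa, ya)
acc_before = accuracy(w, b, Xa, ya)  # perfect on the base task
w, b = train(w, b, Xb, yb)           # fine-tune on the novel task only
acc_after = accuracy(w, b, Xa, ya)   # base-task accuracy collapses to 0
print(acc_before, acc_after)
```

The novel-task gradients simply overwrite the weights that encoded the base task, which is the failure mode incremental learning methods try to prevent.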
Mining
For class-incremental semantic segmentation, such a phenomenon often becomes much worse due to the background shift, i.e., some concepts learned at previous stages are assigned to the background class at the current training stage, thereby significantly reducing performance on these old concepts. To address this issue, we propose a simple yet effective method in this paper, named Mining unseen Classes via Regional Objectness for Segmentation (MicroSeg).
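The background shift mentioned above can be made concrete with a small, hypothetical label-remapping sketch (an illustration of the phenomenon, not MicroSeg itself): at each incremental stage, only that stage's classes are annotated, so pixels of previously learned classes are collapsed into background.

```python
import numpy as np

BACKGROUND = 0

def stage_labels(full_mask, current_classes):
    """Remap a fully annotated mask to what a given training stage sees:
    pixels of any class not being learned at this stage become background."""
    mask = np.asarray(full_mask)
    return np.where(np.isin(mask, list(current_classes)), mask, BACKGROUND)

# Toy ground-truth mask: class 1 was learned at stage 1, class 2 at stage 2.
full = np.array([[1, 1, 0],
                 [2, 2, 0]])

# At stage 2, pixels of the old class 1 are labeled as background,
# so naive training on these labels actively erodes the old concept.
print(stage_labels(full, {2}))
```

Training against such remapped labels treats old-class pixels as negatives, which is exactly why background shift compounds catastrophic forgetting in this setting.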
A Surprisingly Simple Approach to Generalized Few-Shot Semantic Segmentation
In this paper, we propose a simple yet effective method for GFSS that does not use the techniques mentioned above. We also theoretically show that our method perfectly maintains the segmentation performance of the base-class model over most of the base classes. Through numerical experiments, we demonstrate the effectiveness of our method: it improved novel-class segmentation performance in the 1-shot scenario by 6.1% on the PASCAL-5i dataset, and by 4.7% on