Unsupervised Object-Level Representation Learning from Scene Images

Neural Information Processing Systems

Contrastive self-supervised learning has largely narrowed the gap to supervised pre-training on ImageNet. However, its success relies heavily on the object-centric priors of ImageNet, i.e., different augmented views of the same image correspond to the same object. Such a heavily curated constraint becomes immediately infeasible when pre-training on more complex scene images containing many objects. To overcome this limitation, we introduce Object-level Representation Learning (ORL), a new self-supervised learning framework for scene images. Our key insight is to leverage image-level self-supervised pre-training as the prior for discovering object-level semantic correspondence, thus realizing object-level representation learning from scene images. Extensive experiments on COCO show that ORL significantly improves the performance of self-supervised learning on scene images, even surpassing supervised ImageNet pre-training on several downstream tasks. Furthermore, ORL improves downstream performance as more unlabeled scene images become available, demonstrating its potential for harnessing unlabeled data in the wild. We hope our approach can motivate future research on more general-purpose unsupervised representation learning from scene data.
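At its core, the object-level contrastive objective described above still reduces to an InfoNCE-style loss, except that the positive pair is a pair of corresponding object crops (discovered via the image-level pre-trained prior) rather than two augmentations of a whole image. The sketch below is a minimal, hypothetical simplification, not the paper's implementation: `query` and `key` stand in for embeddings of two corresponding object crops, and `queue` for negative embeddings.

```python
import numpy as np

def info_nce(query, key, queue, temperature=0.2):
    """InfoNCE loss for one positive pair against a queue of negatives.

    Hypothetical ORL-style simplification: `query` and `key` would be
    embeddings of two corresponding object crops, `queue` the negatives.
    """
    # Normalize embeddings so dot products become cosine similarities.
    q = query / np.linalg.norm(query)
    k = key / np.linalg.norm(key)
    neg = queue / np.linalg.norm(queue, axis=1, keepdims=True)
    # Logits: positive similarity first, then the negatives.
    logits = np.concatenate([[q @ k], neg @ q]) / temperature
    # Cross-entropy with the positive at index 0, via a stable log-sum-exp.
    m = logits.max()
    return np.log(np.exp(logits - m).sum()) + m - logits[0]
```

The loss is small when the two object crops embed close together relative to the negatives, and large when the positive pair is no more similar than the negatives.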


Unsupervised Object-Level Representation Learning from Scene Images Supplementary Material

Neural Information Processing Systems

The results are averaged across five independent runs. The learning rate is initialized as 0.02 with a linear warmup, and is decayed by 0.2 at 12 and 16 epochs. The implementation details of our most essential image-level baseline, BYOL [5], are also provided, along with our reproduced results vs. existing results for BYOL; all are based on 800-epoch pre-training on COCO with ResNet-50. Figure 2 visualizes more attention maps generated by BYOL and ORL.
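The schedule above is a standard step decay with linear warmup. A minimal sketch follows; note that the warmup length is not stated in this excerpt, so `warmup_epochs` is a placeholder parameter, while `base_lr=0.02`, the milestones (12, 16), and the decay factor 0.2 come from the text.

```python
def learning_rate(epoch, base_lr=0.02, warmup_epochs=1,
                  milestones=(12, 16), gamma=0.2):
    """Step-decay schedule with linear warmup.

    `warmup_epochs` is an assumed placeholder (the excerpt does not
    state the warmup length). The rate ramps linearly up to `base_lr`,
    then is multiplied by `gamma` at each milestone epoch.
    """
    if epoch < warmup_epochs:
        # Linear warmup toward base_lr over the first warmup_epochs.
        return base_lr * (epoch + 1) / warmup_epochs
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

With the stated values, the rate is 0.02 after warmup, 0.004 from epoch 12, and 0.0008 from epoch 16.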

