Learning from Rich Semantics and Coarse Locations for Long-tailed Object Detection

Neural Information Processing Systems

A simple and effective way to improve long-tailed object detection (LTOD) is to use extra data to increase the number of training samples for tail classes. However, collecting bounding box annotations, especially for rare categories, is costly and tedious. Therefore, previous studies resort to datasets with image-level labels to enrich the number of samples for rare classes by exploring image-level semantics (as shown in Figure 1 (a)). While appealing, directly learning from such data to benefit detection is challenging, since they lack the bounding box annotations that are essential for object detection.
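As a concrete illustration of re-balancing tail classes (an LVIS-style baseline, not this paper's method), repeat-factor sampling raises the sampling rate of images containing rare categories. The class frequencies and threshold below are made-up values for demonstration:

```python
import math

def repeat_factors(class_freqs, thresh=0.001):
    """Per-class repeat factor r(c) = max(1, sqrt(t / f(c))).

    Classes rarer than the threshold t are oversampled in proportion
    to the square root of their rarity; common classes are unchanged.
    """
    return {c: max(1.0, math.sqrt(thresh / f)) for c, f in class_freqs.items()}

# Hypothetical image-level frequencies: fraction of images containing the class.
freqs = {"person": 0.5, "dog": 0.0005, "okapi": 0.00001}
rf = repeat_factors(freqs)
# Rare classes receive larger repeat factors; "person" stays at 1.0.
```

A per-image repeat factor is then typically taken as the maximum over the classes present in that image.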


Supplementary Material

Neural Information Processing Systems

The material provided in this document contains additional information relevant to the dataset. We provide extra details about data collection, the annotation clean-up pipeline, and evaluation. Finally, we also provide a datasheet for the dataset. In this section, we give additional details regarding the dataset collection process. The Bucktales dataset can be accessed via the provided link. A guide to using the DarkLabel annotation tool is provided with the dataset on Edmond; the dataset can also be found by searching the Edmond platform of the Max Planck Group. The link to the annotation analysis code is here. Video recording was done at sunrise and sunset every day between 2-18 March 2023 in Tal Chhapar Wildlife Sanctuary, India. The images for the object detection dataset were selected from nine different days in this period. Peak activity on the lek occurred between 9-15 March 2023.




Fetch and Forge: Efficient Dataset Condensation for Object Detection

Neural Information Processing Systems

Dataset condensation (DC) is an emerging technique capable of creating compact synthetic datasets from large originals while maintaining considerable performance. It is crucial for accelerating network training and reducing data storage requirements. However, current research on DC mainly focuses on image classification, with less exploration of object detection. This is primarily due to two challenges: (i) the multitasking nature of object detection complicates the condensation process, and (ii) object detection datasets are characterized by large-scale and high-resolution data, which are difficult for existing DC methods to handle. As a remedy, we propose DCOD, the first dataset condensation framework for object detection. It operates in two stages, Fetch and Forge: it initially stores key localization and classification information in model parameters, and then reconstructs synthetic images via model inversion. To handle the complexity of multiple objects in an image, we propose Foreground Background Decoupling to centrally update the foregrounds of multiple instances, and Incremental PatchExpand to further enhance the diversity of foregrounds. Extensive experiments on various detection datasets demonstrate the superiority of DCOD. Even at an extremely low compression rate of 1\%, we achieve 46.4\% and 24.7\% $\text{AP}_{50}$ on VOC and COCO, respectively, significantly reducing detector training duration.
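The Forge stage's model inversion can be sketched in miniature: starting from a blank synthetic input, optimize its pixels by gradient descent so that a frozen model reproduces a stored response. The linear "detector head" and target below are stand-ins, not DCOD's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))      # frozen model parameters (stand-in for the Fetch stage)
target = rng.normal(size=4)       # stored classification/localization response

x = np.zeros(16)                  # synthetic "image", initialised flat
lr = 0.01
for _ in range(500):
    residual = W @ x - target
    x -= lr * (W.T @ residual)    # gradient of 0.5 * ||W x - target||^2 w.r.t. x

loss = 0.5 * np.sum((W @ x - target) ** 2)
```

In the full method the frozen model is a trained detector, the objective includes classification and box-regression terms, and regularizers keep the inverted images natural-looking.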


DetCLIP: Dictionary-Enriched Visual-Concept Paralleled Pre-training for Open-world Detection

Neural Information Processing Systems

Open-world object detection, as a more general and challenging goal, aims to recognize and localize objects described by arbitrary category names. The recent work GLIP formulates this problem as a grounding problem by concatenating all category names of detection datasets into sentences, which leads to inefficient interaction between category names. This paper presents DetCLIP, a paralleled visual-concept pre-training method for open-world detection that resorts to knowledge enrichment from a designed concept dictionary. To achieve better learning efficiency, we propose a novel paralleled concept formulation that extracts concepts separately to better utilize heterogeneous datasets (i.e., detection, grounding, and image-text pairs) for training. We further design a concept dictionary (with descriptions) from various online sources and detection datasets to provide prior knowledge for each concept. By enriching the concepts with their descriptions, we explicitly build the relationships among various concepts to facilitate open-domain learning. The proposed concept dictionary is further used to provide sufficient negative concepts for the construction of the word-region alignment loss, and to complete labels for objects with missing descriptions in the captions of image-text pair data. The proposed framework demonstrates strong zero-shot detection performance; e.g., on the LVIS dataset, our DetCLIP-T outperforms GLIP-T by 9.9% mAP and obtains a 13.5% improvement on rare categories compared to the fully-supervised model with the same backbone as ours.
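A word-region alignment loss of the kind described can be sketched as softmax cross-entropy over region-concept similarities, where non-matching concept embeddings (e.g., drawn from a dictionary) serve as negatives. The feature shapes, temperature, and function name here are illustrative assumptions, not DetCLIP's exact formulation:

```python
import numpy as np

def alignment_loss(region_feats, concept_feats, pos_idx, temperature=0.07):
    """Softmax cross-entropy over temperature-scaled cosine similarities.

    region_feats:  (R, D) region embeddings
    concept_feats: (C, D) concept embeddings; pos_idx[i] is the matching
                   concept for region i, all other rows act as negatives.
    """
    r = region_feats / np.linalg.norm(region_feats, axis=1, keepdims=True)
    c = concept_feats / np.linalg.norm(concept_feats, axis=1, keepdims=True)
    logits = (r @ c.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(pos_idx)), pos_idx].mean()

feats = np.eye(3)                                 # toy orthogonal embeddings
good = alignment_loss(feats, feats, [0, 1, 2])    # correct pairing: low loss
bad = alignment_loss(feats, feats, [1, 2, 0])     # shuffled pairing: high loss
```

Enlarging the negative pool with dictionary concepts sharpens this contrast, since each region must be discriminated against many more non-matching names.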



Appendix for Dictionary-Enriched Visual-Concept Paralleled Pre-training for Open-world Detection: A. Negative Impacts and Limitations

Neural Information Processing Systems

YFCC [11], we expect to extend our method to larger image-text pair datasets from the Internet. We use a Region Proposal Network (RPN) pre-trained on Objects365 to extract object proposals. To alleviate the partial-label problem, we use concept names from our proposed concept dictionary (Sec. 3.2) instead of the raw caption as the text input. Following CLIP, we use the prompt "a photo of a category." The explanation of each dataset can be found in the table caption.