Review for NeurIPS paper: RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder
Weaknesses: - While the overall performance is strong, the reviewer is not excited about the technical novelty. The good performance seems to come from putting existing output modalities together. Training CornerNet/CenterNet heads in an FPN structure is new, but this part is not well explained in the paper: what is the training loss for the point head? Is it the CornerNet-style focal loss or standard cross-entropy? How are different points assigned to different FPN levels?
- Information Technology > Artificial Intelligence > Vision (0.44)
- Information Technology > Data Science (0.40)
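For reference, the CornerNet-style focal loss the reviewer asks about can be sketched as below. This follows the published CornerNet formulation (a penalty-reduced focal loss on keypoint heatmaps) and is not confirmed to be what this paper actually uses; the function name and defaults are illustrative.

```python
import numpy as np

def cornernet_focal_loss(pred, gt, alpha=2.0, beta=4.0, eps=1e-6):
    """CornerNet-style penalty-reduced focal loss on a keypoint heatmap.

    pred: predicted probabilities in (0, 1), same shape as gt.
    gt:   ground-truth heatmap where exact keypoint locations are 1 and
          nearby pixels hold Gaussian-reduced penalties in [0, 1).
    The loss is normalized by the number of positive keypoints.
    """
    pos = (gt == 1)
    # Positive locations: down-weight easy, confident predictions.
    pos_loss = -((1 - pred[pos]) ** alpha) * np.log(pred[pos] + eps)
    # Negative locations: (1 - gt)^beta reduces the penalty near keypoints.
    neg_loss = (-((1 - gt[~pos]) ** beta)
                * (pred[~pos] ** alpha)
                * np.log(1 - pred[~pos] + eps))
    n_pos = max(int(pos.sum()), 1)
    return (pos_loss.sum() + neg_loss.sum()) / n_pos
```

A standard cross-entropy would instead penalize all non-keypoint pixels equally, which is what the reviewer is asking the authors to disambiguate.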
Review for NeurIPS paper: RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder
Good work on analyzing the pros and cons of various object representations, as well as a neat way to combine them into a single framework that gives good gains on the COCO benchmark. The proposed solution of using a self-attention module to bridge the representations is original, simple, and widely applicable. I think the method and the work reveal intriguing differences between the various representations, and this will be useful to the community. The authors should adapt the camera-ready in accordance with the post-rebuttal comments from the reviewers (esp.
- Information Technology > Data Science (0.40)
- Information Technology > Artificial Intelligence > Vision (0.40)
RelationNet++: Bridging Visual Representations for Object Detection via Transformer Decoder
Existing object detection frameworks are usually built on a single format of object/part representation, i.e., anchor/proposal rectangle boxes in RetinaNet and Faster R-CNN, center points in FCOS and RepPoints, and corner points in CornerNet. While these different representations usually drive the frameworks to perform well in different aspects, e.g., better classification or finer localization, it is in general difficult to combine these representations in a single framework to make good use of each strength, due to the heterogeneous or non-grid feature extraction by different representations. This paper presents an attention-based decoder module, similar to that in the Transformer \cite{vaswani2017attention}, to bridge other representations into a typical object detector built on a single representation format, in an end-to-end fashion. The other representations act as a set of \emph{key} instances to strengthen the main \emph{query} representation features in the vanilla detectors. Novel techniques are proposed for efficient computation of the decoder module, including a \emph{key sampling} approach and a \emph{shared location embedding} approach.
- Information Technology > Data Science (0.76)
- Information Technology > Artificial Intelligence > Vision (0.67)
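The key/query bridging described in the abstract can be sketched as a single cross-attention step: query features from the detector's main representation (e.g. per-anchor features) attend to a sampled set of key features (e.g. corner/center point features) and are strengthened by a residual add. This is a minimal single-head numpy sketch; the actual method uses multi-head attention with learned projections, key sampling, and shared location embeddings, and `bridge_attention` is an illustrative name, not the paper's API.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bridge_attention(query_feats, key_feats):
    """Single-head cross-attention in the spirit of the Transformer decoder.

    query_feats: (Nq, d) features of the detector's main representation.
    key_feats:   (Nk, d) features of the auxiliary representations.
    Returns the query features strengthened by the attended key features.
    """
    d = query_feats.shape[-1]
    # Scaled dot-product attention scores, (Nq, Nk).
    scores = query_feats @ key_feats.T / np.sqrt(d)
    attn = softmax(scores, axis=-1)
    # Aggregate key features per query and add residually.
    attended = attn @ key_feats
    return query_feats + attended
```

In the full method, restricting `key_feats` to a small sampled subset (the \emph{key sampling} approach) keeps this step cheap even when the auxiliary representations are dense.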