Interpretable Open-Vocabulary Referring Object Detection with Reverse Contrast Attention
Juanico, Drandreb Earl O., Atienza, Rowel O., Go, Jeffrey Kenneth
arXiv.org Artificial Intelligence
We propose Reverse Contrast Attention (RCA), a plug-in method that enhances object localization in vision-language transformers without retraining. RCA reweights final-layer attention by suppressing extreme values and amplifying mid-level activations, letting semantically relevant but subdued tokens guide predictions. We evaluate it on Open-Vocabulary Referring Object Detection (OV-RefOD), introducing FitAP, a confidence-free average-precision metric based on IoU and box area. RCA improves FitAP in 11 of 15 open-source VLMs, with gains of up to $+26.6\%$. Effectiveness aligns with attention sharpness and fusion timing: late-fusion models benefit consistently, but models like $\texttt{DeepSeek-VL2}$ also improve, pointing to capacity and disentanglement as key factors. RCA offers both interpretability and performance gains for multimodal transformers. Code and dataset are available at https://github.com/earl-juanico/rca
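The abstract describes RCA only at a high level (suppress attention extremes, amplify mid-level activations, no retraining). The sketch below is an illustrative reweighting under that description, not the paper's exact formula: the function name `reverse_contrast_reweight` and the compression parameter `alpha` are assumptions for illustration. It pulls each row of an attention map toward its mean, shrinking extreme values while raising the relative weight of mid-level tokens, then renormalizes.

```python
import numpy as np

def reverse_contrast_reweight(attn: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Illustrative sketch of a reverse-contrast-style reweighting
    (an assumption, not the paper's exact RCA formula).

    attn  : attention weights, shape (..., num_tokens), rows summing to 1
    alpha : compression factor in (0, 1); smaller = stronger contrast reversal
    """
    mid = attn.mean(axis=-1, keepdims=True)
    # Compress toward the per-row mean: extremes shrink,
    # mid-level tokens gain relative weight.
    reweighted = mid + alpha * (attn - mid)
    reweighted = np.clip(reweighted, 1e-9, None)
    # Renormalize so each row is again a valid attention distribution.
    return reweighted / reweighted.sum(axis=-1, keepdims=True)

# Example: a sharply peaked attention row becomes flatter, so the
# dominant token loses weight and subdued tokens gain it.
row = np.array([[0.80, 0.10, 0.05, 0.05]])
out = reverse_contrast_reweight(row)
```

Applied as a plug-in at the final layer, such a transform needs no retraining, which matches the abstract's claim; the exact compression rule in the paper may differ.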
Jul-31-2025