The Solution for the 5th GCAIAC Zero-shot Referring Expression Comprehension Challenge
Longfei Huang, Feng Yu, Zhihao Guan, Zhonghua Wan, Yang Yang
arXiv.org Artificial Intelligence
This report presents a solution for the zero-shot referring expression comprehension task. Vision-language multimodal foundation models such as CLIP and SAM have attracted significant attention in recent years and have become a cornerstone of mainstream research. A key application of these models is their ability to generalize to zero-shot downstream tasks. Unlike traditional referring expression comprehension, zero-shot referring expression comprehension applies pre-trained vision-language models to the task directly, without task-specific training. Recent studies have improved the zero-shot performance of multimodal foundation models on referring expression comprehension by introducing visual prompts. To address this challenge, we combined multiple visual prompts, accounted for the influence of textual prompts, and employed joint prediction tailored to the characteristics of the data. Our approach achieved accuracy scores of 84.825 on the A leaderboard and 71.460 on the B leaderboard, securing first place.
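For illustration, here is a minimal sketch of one common form of the technique the abstract describes: scoring candidate boxes with CLIP after drawing a "red circle" visual prompt around each one. This is not the authors' exact pipeline; the model checkpoint, the textual prompt template, and the source of the candidate boxes (e.g., SAM proposals) are all assumptions.

```python
# Sketch of CLIP-based zero-shot REC with a "red circle" visual prompt.
# An illustration of the general approach, not the authors' method;
# model name, prompt template, and box source are assumptions.
import torch
from PIL import Image, ImageDraw
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def draw_circle_prompt(image, box, width=4):
    """Return a copy of the image with a red ellipse drawn around the box."""
    prompted = image.copy()
    ImageDraw.Draw(prompted).ellipse(box, outline="red", width=width)
    return prompted

@torch.no_grad()
def pick_box(image, boxes, expression):
    """Score each visually prompted candidate against the expression."""
    prompted_images = [draw_circle_prompt(image, b) for b in boxes]
    # The textual prompt template below is also an assumption.
    inputs = processor(text=[f"a photo of {expression}"],
                       images=prompted_images,
                       return_tensors="pt", padding=True)
    scores = model(**inputs).logits_per_image.squeeze(-1)  # one score per box
    return boxes[int(scores.argmax())]

# Usage: candidate boxes might come from a proposal generator such as SAM.
# image = Image.open("example.jpg")
# boxes = [(30, 40, 180, 200), (200, 60, 340, 220)]
# best = pick_box(image, boxes, "the man in the red shirt")
```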
Jul-6-2024