Single-Modal Entropy based Active Learning for Visual Question Answering
Kim, Dong-Jin; Cho, Jae Won; Choi, Jinsoo; Jung, Yunjae; Kweon, In So
arXiv.org Artificial Intelligence
Constructing a large-scale labeled dataset in the real world, especially for high-level tasks (e.g., Visual Question Answering), can be expensive and time-consuming. In addition, with the ever-growing amounts of data and architecture complexity, Active Learning has become an important aspect of computer vision research. In this work, we address Active Learning in the multi-modal setting of Visual Question Answering (VQA). In light of the two input modalities, image and question, we propose a novel method for effective sample acquisition that uses ad hoc single-modal branches for each input to leverage its information. Our mutual-information-based sample acquisition strategy, Single-Modal Entropic Measure (SMEM), combined with our self-distillation technique, enables the sample acquisitor to exploit all present modalities and find the most informative samples. Our idea is simple to implement, cost-efficient, and readily adaptable to other multi-modal tasks. We validate our approach on various VQA datasets, achieving state-of-the-art performance compared to existing Active Learning baselines.
Oct-21-2021
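
To make the idea concrete, below is a minimal sketch of single-modal, entropy-based sample acquisition for VQA. It assumes image-only and question-only branches that produce answer logits for the unlabeled pool; the weighted-sum combination of the two entropies and all function names are illustrative assumptions, not the paper's exact SMEM scoring rule or self-distillation setup.

```python
# Minimal sketch: rank unlabeled VQA samples by the predictive entropy of
# single-modal (image-only and question-only) branches.
# NOTE: the branch models and the way their entropies are combined here are
# illustrative assumptions, not the exact SMEM formulation from the paper.
import numpy as np


def softmax(logits, axis=-1):
    # Numerically stable softmax over the answer dimension.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)


def entropy(probs, eps=1e-12):
    """Shannon entropy of each row of an (N, C) probability matrix."""
    return -(probs * np.log(probs + eps)).sum(axis=-1)


def acquisition_scores(image_logits, question_logits, alpha=0.5):
    """Score unlabeled samples by combining the predictive entropies of the
    image-only and question-only branches (higher = more informative).
    The weighted-sum combination (alpha) is an assumption for illustration."""
    h_img = entropy(softmax(image_logits))
    h_qst = entropy(softmax(question_logits))
    return alpha * h_img + (1.0 - alpha) * h_qst


def select_batch(image_logits, question_logits, budget):
    """Return indices of the `budget` highest-scoring unlabeled samples."""
    scores = acquisition_scores(image_logits, question_logits)
    return np.argsort(-scores)[:budget]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_unlabeled, n_answers = 1000, 3129  # 3129: a common VQA answer-vocabulary size
    img_logits = rng.normal(size=(n_unlabeled, n_answers))
    qst_logits = rng.normal(size=(n_unlabeled, n_answers))
    picked = select_batch(img_logits, qst_logits, budget=64)
    print(picked[:10])
```

In the paper the single-modal branches are attached to the VQA model and trained with self-distillation; here they are abstracted as precomputed logits so the acquisition step can be shown in isolation.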