Robust Visual Question Answering: Datasets, Methods, and Future Challenges

Jie Ma, Pinghui Wang, Dechen Kong, Zewei Wang, Jun Liu, Hongbin Pei, Junzhou Zhao

arXiv.org Artificial Intelligence 

Abstract--Visual question answering (VQA) requires a system to provide an accurate natural language answer given an image and a natural language question. However, it is widely recognized that previous generic VQA methods tend to memorize biases present in the training data rather than learning proper behaviors, such as grounding images before predicting answers. As a result, these methods usually achieve high in-distribution but poor out-of-distribution performance. In recent years, various datasets and debiasing methods have been proposed to evaluate and enhance VQA robustness, respectively. This paper provides the first comprehensive survey of this emerging research direction. Specifically, we first give an overview of the development of datasets from the in-distribution and out-of-distribution perspectives. Second, we examine the evaluation metrics employed by these datasets. Third, we propose a typology that presents the development process, similarities and differences, robustness comparison, and technical features of existing debiasing methods. Furthermore, we analyze and discuss the robustness of representative vision-and-language pre-training models on VQA. Finally, through a thorough review of the available literature and experimental analysis, we discuss key areas for future research from various viewpoints.

Visual Question Answering (VQA) aims to build intelligent machines that are able to accurately provide a natural language answer given an image and a natural language question about the image [1].
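To make the bias-memorization claim concrete, below is a minimal, hypothetical Python sketch (not code from the paper): a "blind" baseline that ignores the image entirely and memorizes the majority training answer for each question type, scored with the standard VQA soft-accuracy metric, min(#agreeing annotators / 3, 1). The question-type heuristic and toy data are assumptions for illustration; such a baseline can look strong in-distribution but collapses out-of-distribution.

```python
# Hypothetical illustration of language-prior bias in VQA (not the paper's code).
from collections import Counter, defaultdict

def question_type(question: str, n_words: int = 2) -> str:
    """Crude question-type key, e.g. "What sport is this?" -> "what sport"."""
    return " ".join(question.lower().split()[:n_words])

def fit_answer_prior(train_set):
    """Majority answer per question type; train_set: [(question, answer), ...]."""
    counts = defaultdict(Counter)
    for question, answer in train_set:
        counts[question_type(question)][answer] += 1
    return {qtype: c.most_common(1)[0][0] for qtype, c in counts.items()}

def vqa_soft_accuracy(predicted: str, human_answers: list[str]) -> float:
    """Standard VQA metric: min(#annotators agreeing with the prediction / 3, 1)."""
    return min(human_answers.count(predicted) / 3.0, 1.0)

# Toy training split biased toward "tennis" for "what sport ..." questions.
train = [("What sport is this?", "tennis")] * 9 + [("What sport is this?", "baseball")]
prior = fit_answer_prior(train)

# The blind baseline answers without ever looking at an image.
pred = prior[question_type("What sport is being played?")]
print(pred)                                      # "tennis"
print(vqa_soft_accuracy(pred, ["tennis"] * 10))  # 1.0: in-distribution, bias helps
print(vqa_soft_accuracy(pred, ["baseball"] * 10))  # 0.0: out-of-distribution, bias hurts
```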
