Review: Overcoming Language Priors in Visual Question Answering with Adversarial Regularization

Neural Information Processing Systems 

This paper studies the problem of handling language/text priors in the task of visual question answering. The strong performance achieved by many state-of-the-art VQA systems is accomplished largely by learning a question encoding that captures correlations between questions and answers while ignoring the image information. So the problem is important to the VQA research community. In general, the paper is well-written and easy to follow. Some concerns and suggestions follow:

1) The major concern is the basic intuition of the question-only adversary: the question encoding q_i from the question encoder is not necessarily the same bias that leads the VQA model f to ignore the visual content. Since f can be a deep neural network, for example a deep RNN or a deep RNN-CNN that leverages both the question embedding and the visual embedding, the non-linearity in f could turn the question embedding into an image-aware representation when generating the answer distribution.
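To make the mechanism under discussion concrete, the question-only adversary in this line of work is typically trained against the question encoder through a gradient-reversal step: the adversary tries to predict the answer from q_i alone, while the reversed gradient pushes the encoder to remove exactly that image-independent signal. The sketch below is a minimal, hedged illustration of gradient reversal only (class and parameter names are illustrative, not the paper's actual code):

```python
class GradReverse:
    """Toy gradient-reversal layer: identity on the forward pass,
    gradients scaled by -lam on the backward pass. In adversarial
    regularization this sits between the question encoder and the
    question-only answer classifier."""

    def __init__(self, lam=1.0):
        # lam is the adversary weight (a hyperparameter in such setups)
        self.lam = lam

    def forward(self, q):
        # Forward: the question encoding q passes through unchanged.
        return list(q)

    def backward(self, grad_out):
        # Backward: flip and scale the adversary's gradient, so the
        # question encoder is pushed to *discard* answer-predictive,
        # image-independent information rather than amplify it.
        return [-self.lam * g for g in grad_out]


grl = GradReverse(lam=0.5)
print(grl.forward([0.2, -1.0, 3.0]))   # unchanged encoding
print(grl.backward([1.0, 2.0, -4.0]))  # reversed, scaled gradient
```

Note that this sketch only shows the reversal itself; the reviewer's point above is precisely that reversing the gradient at q_i does not guarantee the bias inside a non-linear f is removed.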