Do Explanations make VQA Models more Predictable to a Human?
Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, Devi Parikh
arXiv.org Artificial Intelligence
A rich line of research attempts to make deep neural networks more transparent by generating human-interpretable 'explanations' of their decision process, especially for interactive tasks like Visual Question Answering (VQA). In this work, we analyze whether existing explanations indeed make a VQA model -- its responses as well as its failures -- more predictable to a human. Surprisingly, we find that they do not. On the other hand, we find that human-in-the-loop approaches that treat the model as a black box do.
Oct-29-2018