
Towards Reasoning-Aware Explainable VQA
Rakesh Vaideeswaran · Feng Gao · Abhinav Mathur · Govindarajan Thattai
Event URL: https://openreview.net/forum?id=Vz_gE-nrFu9

The domain of joint vision-language understanding, especially reasoning in Visual Question Answering (VQA) models, has garnered significant attention recently. While most existing VQA models focus on improving accuracy, the way a model arrives at an answer is often a black box. As a step towards making VQA more explainable and interpretable, we build on a state-of-the-art (SOTA) VQA framework by augmenting it with an end-to-end explanation-generation module. In this paper, we investigate two decoder architectures, an LSTM and a Transformer, as the explanation generator. Our method generates human-readable explanations while maintaining SOTA VQA accuracy on the GQA-REX (77.49%) and VQA-E (71.48%) datasets. Of the generated explanations, 65.16% were judged valid by human evaluators, and 60.5% were both valid and led to the correct answer.
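The augmentation described in the abstract, a VQA backbone extended with an explanation decoder that emits a human-readable rationale alongside the answer, can be sketched roughly as follows. All names here are hypothetical, and the toy greedy loop over a fixed scoring table stands in for the LSTM/Transformer decoders studied in the paper; it only illustrates the answer-plus-explanation output interface.

```python
# Minimal sketch (assumed names): a VQA model that returns both an answer
# and a generated explanation. In the real model, toy_decoder_step would be
# an LSTM or Transformer decoder attending to fused image-question features.

VOCAB = ["<eos>", "the", "animal", "has", "stripes", "so", "it", "is", "a", "zebra"]

def toy_decoder_step(fused_features, prev_token_id, step):
    # Hypothetical scoring: a fixed token-id sequence ending in <eos>,
    # standing in for a learned next-token distribution.
    template = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
    return template[step] if step < len(template) else 0

def answer_and_explain(fused_features, max_len=20):
    # Answer head: placeholder for an argmax over an answer distribution.
    answer = "zebra"
    # Explanation head: greedy decoding until <eos>.
    tokens, prev = [], 0
    for step in range(max_len):
        nxt = toy_decoder_step(fused_features, prev, step)
        if nxt == 0:  # <eos> token
            break
        tokens.append(VOCAB[nxt])
        prev = nxt
    return answer, " ".join(tokens)

ans, expl = answer_and_explain(fused_features=None)
```

Training such a module end-to-end typically means jointly minimizing the answer-classification loss and a token-level cross-entropy loss on the reference explanation, so the generated rationale is conditioned on the same fused features that produce the answer.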

Author Information

Rakesh Vaideeswaran (University of Illinois, Urbana-Champaign)
Feng Gao (UCLA)

Technically driven and result-focused professional with demonstrated ownership over 16 years of work experience, looking to provide differentiated solutions as a member of an engineering team, with expertise in Pattern Recognition, Machine Learning, and Deep Learning.

Govindarajan Thattai (Amazon)
