Spotlight
Heterogeneous Graph Learning for Visual Commonsense Reasoning
Weijiang Yu · Jingwen Zhou · Weihao Yu · Xiaodan Liang · Nong Xiao

Wed Dec 11th 04:45 -- 04:50 PM @ West Ballrooms A + B

The visual commonsense reasoning task aims to lead the research field toward cognition-level reasoning: models must predict the correct answer while also providing a convincing reasoning path, yielding three sub-tasks, i.e., Q->A, QA->R and Q->AR. The task poses two great challenges: proper semantic alignment between the vision and language domains, and knowledge reasoning that generates persuasive reasoning paths. Existing works either resort to a powerful end-to-end network that cannot produce interpretable reasoning paths, or solely explore the intra-relationships of visual objects (a homogeneous graph) while ignoring cross-domain semantic alignment between visual concepts and linguistic words. In this paper, we propose a new Heterogeneous Graph Learning (HGL) framework that seamlessly integrates intra-graph and inter-graph reasoning to bridge the vision and language domains. Our HGL consists of a primal vision-to-answer heterogeneous graph (VAHG) module and a dual question-to-answer heterogeneous graph (QAHG) module that interactively refine reasoning paths for semantic agreement. Moreover, HGL integrates a contextual voting module that exploits long-range visual context for better global reasoning. Experiments on the large-scale Visual Commonsense Reasoning benchmark demonstrate the superior performance of our proposed modules on all three tasks (improving accuracy by 5% on Q->A, 3.5% on QA->R, and 5.8% on Q->AR).
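The abstract does not include implementation details, but the inter-graph reasoning it describes — answer words attending over visual object nodes to refine their representations — can be illustrated with a minimal sketch. Everything below (the `hetero_graph_step` function, the bilinear scoring, the dimensions) is an assumption for illustration, not the authors' actual VAHG module:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hetero_graph_step(vision_nodes, answer_nodes, W):
    """One round of inter-graph message passing (hypothetical sketch):
    each answer-word node attends over all visual-object nodes via a
    bilinear affinity, aggregates their features, and refines its own
    representation with a residual update."""
    scores = answer_nodes @ W @ vision_nodes.T   # (n_ans, n_vis) affinities
    attn = softmax(scores, axis=-1)              # normalize over visual nodes
    messages = attn @ vision_nodes               # (n_ans, d) aggregated context
    return answer_nodes + messages               # residual refinement

rng = np.random.default_rng(0)
d = 8
vision = rng.standard_normal((5, d))    # 5 visual-object features (toy)
answer = rng.standard_normal((3, d))    # 3 answer-word features (toy)
W = 0.1 * rng.standard_normal((d, d))   # hypothetical bilinear weight
refined = hetero_graph_step(vision, answer, W)
print(refined.shape)                    # (3, 8)
```

A dual question-to-answer step of the same shape, applied to question-word nodes instead of visual nodes, would correspond loosely to the QAHG module described above.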

Author Information

Weijiang Yu (Sun Yat-sen University)
Jingwen Zhou (Sun Yat-sen University)
Weihao Yu (Sun Yat-sen University)
Xiaodan Liang (Sun Yat-sen University)
Nong Xiao (Sun Yat-sen University)
