Poster
Multimodal Residual Learning for Visual QA
Jin-Hwa Kim · Sang-Woo Lee · Donghyun Kwak · Min-Oh Heo · Jeonghee Kim · Jung-Woo Ha · Byoung-Tak Zhang

Tue Dec 06 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #143

Deep neural networks continue to advance the state of the art in image recognition through a variety of methods. However, applications of these methods to multimodal settings remain limited. We present Multimodal Residual Networks (MRN), which extend the idea of deep residual learning to multimodal residual learning for visual question answering. Unlike deep residual learning, MRN effectively learns a joint representation from visual and language information. The main idea is to use element-wise multiplication for the joint residual mappings, exploiting the residual learning of attentional models in recent studies. We explore various alternative models introduced by multimodality. We achieve state-of-the-art results on the Visual QA dataset for both the Open-Ended and Multiple-Choice tasks. Moreover, we introduce a novel method to visualize the attention effect of the joint representations for each learning block using the back-propagation algorithm, even though the visual features are collapsed without spatial information.
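To make the core idea concrete, below is a minimal sketch of a joint residual block in the spirit the abstract describes: the question representation is carried through a shortcut connection, and the residual mapping fuses question and visual features by element-wise multiplication of nonlinear projections. This is an illustrative reconstruction, not the paper's exact architecture; the class name MRNBlock, the layer sizes, the tanh nonlinearities, and the linear shortcut are all assumptions for the example.

    import torch
    import torch.nn as nn

    class MRNBlock(nn.Module):
        """Sketch of one joint residual block: H(q, v) = shortcut(q) + F(q, v),
        where F(q, v) multiplies projected question and visual features
        element-wise. Hypothetical configuration, for illustration only."""

        def __init__(self, dim_q: int, dim_v: int, dim_joint: int):
            super().__init__()
            self.q_proj = nn.Linear(dim_q, dim_joint)     # question branch
            self.v_proj = nn.Linear(dim_v, dim_joint)     # visual branch
            self.shortcut = nn.Linear(dim_q, dim_joint)   # residual shortcut for q

        def forward(self, q: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
            # Joint residual mapping F(q, v): element-wise product of the
            # nonlinearly projected question and visual features.
            f = torch.tanh(self.q_proj(q)) * torch.tanh(self.v_proj(v))
            # Shortcut connection carries the question representation forward.
            return self.shortcut(q) + f

    # Usage: stack several blocks so the question representation is refined
    # by visual information at each step (dimensions are illustrative).
    block = MRNBlock(dim_q=2400, dim_v=2048, dim_joint=1200)
    q = torch.randn(8, 2400)   # e.g., sentence embedding of the question
    v = torch.randn(8, 2048)   # e.g., collapsed CNN image feature
    h = block(q, v)            # joint representation, shape (8, 1200)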

Author Information

Jin-Hwa Kim (Seoul National University)
Sang-Woo Lee (Seoul National University)
Donghyun Kwak (Seoul National University)
Min-Oh Heo (Seoul National University)

Ph.D. student at Seoul National University

Jeonghee Kim (Naver Labs)
Jung-Woo Ha (Naver Labs)
Byoung-Tak Zhang (Seoul National University)
