This paper revisits visual representation in knowledge-based visual question answering (VQA) and demonstrates that using regional information in a better way can significantly improve performance. While visual representation is extensively studied in traditional VQA, it is under-explored in knowledge-based VQA even though the two tasks share the same spirit, i.e., they rely on visual input to answer the question. Specifically, we observe in most state-of-the-art knowledge-based VQA methods that: 1) visual features are extracted either from the whole image or in a sliding-window manner for retrieving knowledge, and the important relationships within and among object regions are neglected; 2) visual features are not well utilized in the final answering model, which is counter-intuitive to some extent. Based on these observations, we propose a new knowledge-based VQA method, REVIVE, which utilizes explicit object-region information not only in the knowledge retrieval stage but also in the answering model. The key motivation is that object regions and their inherent relationships are important for knowledge-based VQA. We perform extensive experiments on the standard OK-VQA dataset and achieve new state-of-the-art performance, i.e., 58.0% accuracy, surpassing the previous state-of-the-art method by a large margin (+3.6%). We also conduct detailed analysis and show the necessity of regional information in different framework components for knowledge-based VQA. Code is publicly available at https://github.com/yzleroy/REVIVE.
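The abstract's central idea, that object-region features should feed both the knowledge-retrieval stage and the final answering model, can be illustrated with a minimal toy sketch. Everything below (the function names, the tiny label-keyed knowledge base, the string-matching "answerer") is an illustrative stand-in invented for this example, not the authors' actual pipeline or code:

```python
# Hypothetical sketch of the two-stage pipeline the abstract describes:
# regional information is used BOTH to retrieve external knowledge and
# as direct input to the answering model. All components are toy stand-ins.

def detect_regions(image_path):
    # Stand-in for an object detector: (label, feature-vector) pairs.
    return [("umbrella", [0.1, 0.9]), ("beach", [0.8, 0.2])]

def retrieve_knowledge(regions, knowledge_base):
    # Per-region retrieval: here a plain label lookup stands in for
    # feature-based nearest-neighbor search over a real knowledge base.
    return [knowledge_base.get(label, "") for label, _ in regions]

def answer(question, regions, facts):
    # Toy answering model: both the region labels and the retrieved
    # facts are part of the context the answer is produced from.
    context = " ".join(label for label, _ in regions) + " " + " ".join(facts)
    return "rain" if "rain" in context else "unknown"

knowledge_base = {"umbrella": "umbrellas protect against rain"}
regions = detect_regions("photo.jpg")
facts = retrieve_knowledge(regions, knowledge_base)
print(answer("What are the people shielding themselves from?", regions, facts))
```

The point of the sketch is only the data flow: contrast this with pipelines where region features are used for retrieval alone and the answering model sees only whole-image or text features.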
Author Information
Yuanze Lin (University of Washington)
Yujia Xie (Georgia Institute of Technology)
Dongdong Chen (Microsoft Cloud AI)
Yichong Xu (Microsoft)
Chenguang Zhu (Stanford University)
Lu Yuan (Microsoft)
More from the Same Authors
- 2020: Session B, Poster 4: Differentiable Top-k With Optimal Transport
  Yujia Xie
- 2021 Spotlight: Focal Attention for Long-Range Interactions in Vision Transformers
  Jianwei Yang · Chunyuan Li · Pengchuan Zhang · Xiyang Dai · Bin Xiao · Lu Yuan · Jianfeng Gao
- 2022 Poster: OmniVL: One Foundation Model for Image-Language and Video-Language Tasks
  Junke Wang · Dongdong Chen · Zuxuan Wu · Chong Luo · Luowei Zhou · Yucheng Zhao · Yujia Xie · Ce Liu · Yu-Gang Jiang · Lu Yuan
- 2022: Adaptive Pre-training of Language Models for Better Logical Reasoning
  Soumya Sanyal · Yichong Xu · Shuohang Wang · Ziyi Yang · Reid Pryzant · Wenhao Yu · Chenguang Zhu · Xiang Ren
- 2022: Empowering Language Models with Knowledge Graph Reasoning for Question Answering
  Ziniu Hu · Yichong Xu · Wenhao Yu · Shuohang Wang · Ziyi Yang · Chenguang Zhu · Kai-Wei Chang · Yizhou Sun
- 2023 Poster: Learning from Rich Semantics and Coarse Locations for Long-tailed Object Detection
  Lingchen Meng · Xiyang Dai · Jianwei Yang · Dongdong Chen · Yinpeng Chen · Mengchen Liu · Yi-Ling Chen · Zuxuan Wu · Lu Yuan · Yu-Gang Jiang
- 2023 Poster: Uni-ControlNet: All-in-One Control to Text-to-Image Diffusion Models
  Shihao Zhao · Dongdong Chen · Yen-Chun Chen · Jianmin Bao · Shaozhe Hao · Lu Yuan · Kwan-Yee K. Wong
- 2023 Poster: Sparse Modular Activation for Efficient Sequence Modeling
  Liliang Ren · Yang Liu · Shuohang Wang · Yichong Xu · Chenguang Zhu · Cheng Xiang Zhai
- 2023 Poster: Any-to-Any Generation via Composable Diffusion
  Zineng Tang · Ziyi Yang · Chenguang Zhu · Michael Zeng · Mohit Bansal
- 2022 Spotlight: OmniVL: One Foundation Model for Image-Language and Video-Language Tasks
  Junke Wang · Dongdong Chen · Zuxuan Wu · Chong Luo · Luowei Zhou · Yucheng Zhao · Yujia Xie · Ce Liu · Yu-Gang Jiang · Lu Yuan
- 2022 Poster: K-LITE: Learning Transferable Visual Models with External Knowledge
  Sheng Shen · Chunyuan Li · Xiaowei Hu · Yujia Xie · Jianwei Yang · Pengchuan Zhang · Zhe Gan · Lijuan Wang · Lu Yuan · Ce Liu · Kurt Keutzer · Trevor Darrell · Anna Rohrbach · Jianfeng Gao
- 2022 Poster: Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners
  Zhenhailong Wang · Manling Li · Ruochen Xu · Luowei Zhou · Jie Lei · Xudong Lin · Shuohang Wang · Ziyi Yang · Chenguang Zhu · Derek Hoiem · Shih-Fu Chang · Mohit Bansal · Heng Ji
- 2022 Poster: Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning
  Yujia Xie · Luowei Zhou · Xiyang Dai · Lu Yuan · Nguyen Bach · Ce Liu · Michael Zeng
- 2022 Poster: GLIPv2: Unifying Localization and Vision-Language Understanding
  Haotian Zhang · Pengchuan Zhang · Xiaowei Hu · Yen-Chun Chen · Liunian Li · Xiyang Dai · Lijuan Wang · Lu Yuan · Jenq-Neng Hwang · Jianfeng Gao
- 2021 Poster: Stronger NAS with Weaker Predictors
  Junru Wu · Xiyang Dai · Dongdong Chen · Yinpeng Chen · Mengchen Liu · Ye Yu · Zhangyang Wang · Zicheng Liu · Mei Chen · Lu Yuan
- 2021 Poster: Focal Attention for Long-Range Interactions in Vision Transformers
  Jianwei Yang · Chunyuan Li · Pengchuan Zhang · Xiyang Dai · Bin Xiao · Lu Yuan · Jianfeng Gao
- 2021 Poster: Chasing Sparsity in Vision Transformers: An End-to-End Exploration
  Tianlong Chen · Yu Cheng · Zhe Gan · Lu Yuan · Lei Zhang · Zhangyang Wang
- 2020: Poster Session B
  Ravichandra Addanki · Andreea-Ioana Deac · Yujia Xie · Francesco Landolfi · Antoine Prouvost · Claudius Gros · Renzo Massobrio · Abhishek Cauligi · Simon Alford · Hanjun Dai · Alberto Franzin · Nitish Kumar Panigrahy · Brandon Kates · Iddo Drori · Taoan Huang · Zhou Zhou · Marin Vlastelica · Anselm Paulus · Aaron Zweig · Minsu Cho · Haiyan Yin · Michal Lisicki · Nan Jiang · Haoran Sun
- 2020 Poster: Passport-aware Normalization for Deep Model Protection
  Jie Zhang · Dongdong Chen · Jing Liao · Weiming Zhang · Gang Hua · Nenghai Yu
- 2020 Poster: Differentiable Top-k with Optimal Transport
  Yujia Xie · Hanjun Dai · Minshuo Chen · Bo Dai · Tuo Zhao · Hongyuan Zha · Wei Wei · Tomas Pfister
- 2020 Poster: GreedyFool: Distortion-Aware Sparse Adversarial Attack
  Xiaoyi Dong · Dongdong Chen · Jianmin Bao · Chuan Qin · Lu Yuan · Weiming Zhang · Nenghai Yu · Dong Chen
- 2020 Poster: Preference-based Reinforcement Learning with Finite-Time Guarantees
  Yichong Xu · Ruosong Wang · Lin Yang · Aarti Singh · Artur Dubrawski
- 2020 Spotlight: Preference-based Reinforcement Learning with Finite-Time Guarantees
  Yichong Xu · Ruosong Wang · Lin Yang · Aarti Singh · Artur Dubrawski
- 2019: Poster Session
  Rishav Chourasia · Yichong Xu · Corinna Cortes · Chien-Yi Chang · Yoshihiro Nagano · So Yeon Min · Benedikt Boecking · Phi Vu Tran · Kamyar Ghasemipour · Qianggang Ding · Shouvik Mani · Vikram Voleti · Rasool Fakoor · Miao Xu · Kenneth Marino · Lisa Lee · Volker Tresp · Jean-Francois Kagy · Marvin Zhang · Barnabas Poczos · Dinesh Khandelwal · Adrien Bardes · Evan Shelhamer · Jiacheng Zhu · Ziming Li · Xiaoyan Li · Dmitrii Krasheninnikov · Ruohan Wang · Mayoore Jaiswal · Emad Barsoum · Suvansh Sanjeev · Theeraphol Wattanavekin · Qizhe Xie · Sifan Wu · Yuki Yoshida · David Kanaa · Sina Khoshfetrat Pakazad · Mehdi Maasoumy
- 2019 Poster: Meta Learning with Relational Information for Short Sequences
  Yujia Xie · Haoming Jiang · Feng Liu · Tuo Zhao · Hongyuan Zha
- 2017 Poster: Noise-Tolerant Interactive Learning Using Pairwise Comparisons
  Yichong Xu · Hongyang Zhang · Aarti Singh · Artur Dubrawski · Kyle Miller