Structured representations of images that model visual relationships are beneficial for many vision and vision-language applications. However, current human-annotated visual relationship datasets suffer from a long-tailed predicate distribution, which limits the potential of visual relationship models. In this work, we introduce a self-supervised method that implicitly learns visual relationships without relying on any ground-truth visual relationship annotations. Our method relies on 1) intra- and inter-modality encodings, which model relationships within each modality and across modalities, respectively, and 2) relationship probing, which seeks to discover the graph structure within each modality. By leveraging masked language modeling, contrastive learning, and dependency tree distances for self-supervision, our method learns better object features as well as implicit visual relationships. We verify the effectiveness of the proposed method on various vision-language tasks that benefit from improved visual relationship understanding.
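To make the relationship-probing idea concrete, below is a minimal PyTorch sketch, not the authors' released code: it learns a linear projection under which squared L2 distances between contextualized features approximate dependency-tree distances, in the spirit of structural probing. All names here (RelationshipProbe, probe_rank, probe_loss) are illustrative assumptions, and the tensors are random stand-ins.

```python
# A minimal sketch of a relationship/structural probe (assumed, not the
# authors' implementation): project features so that pairwise squared L2
# distances in the projected space match dependency-tree distances.
import torch
import torch.nn as nn

class RelationshipProbe(nn.Module):
    def __init__(self, feat_dim: int, probe_rank: int = 128):
        super().__init__()
        # Linear map into a space where distances are meant to encode tree structure.
        self.proj = nn.Linear(feat_dim, probe_rank, bias=False)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_nodes, feat_dim) contextualized object/token features.
        h = self.proj(feats)                      # (B, N, r)
        diff = h.unsqueeze(2) - h.unsqueeze(1)    # (B, N, N, r) pairwise differences
        return (diff ** 2).sum(-1)                # predicted squared distances (B, N, N)

def probe_loss(pred_dist, tree_dist, mask):
    # L1 loss between predicted and dependency-tree distances over valid pairs.
    return ((pred_dist - tree_dist).abs() * mask).sum() / mask.sum().clamp(min=1)

# Usage sketch with placeholder tensors:
probe = RelationshipProbe(feat_dim=768)
feats = torch.randn(2, 10, 768)                       # e.g., encoder outputs
tree_dist = torch.randint(0, 5, (2, 10, 10)).float()  # placeholder tree distances
mask = torch.ones(2, 10, 10)                          # valid-pair mask
loss = probe_loss(probe(feats), tree_dist, mask)
loss.backward()
```

On the text side, the supervision signal would come from distances in a parsed dependency tree; on the visual side, no such ground truth exists, which is why the abstract describes the visual relationships as learned implicitly.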
Author Information
Jiuxiang Gu (Adobe Research)
Jason Kuen (Adobe Research)
Shafiq Joty (Nanyang Technological University)
Jianfei Cai (Monash University)
Vlad I. Morariu (Adobe Research)
Handong Zhao (Adobe Research)
Tong Sun (Adobe Research)
Accomplished research thought leader and technology innovator with a proven track record of 15+ years of leadership in incubating new concepts through state-of-the-art machine learning methods and tools, developing advanced rapid prototypes, and delivering competitive technologies to market opportunities in cross-disciplinary, cross-functional team environments. Holds 22 issued US patents and 40+ peer-reviewed publications in prestigious conferences and journals. Specialties: R&D leadership, leading-edge innovation strategy, machine learning, natural language processing and understanding, data-driven cybersecurity, social media analytics, big data centers of excellence, service-oriented architecture, and distributed & cloud computing.
More from the Same Authors
- 2021 Spotlight: Align before Fuse: Vision and Language Representation Learning with Momentum Distillation »
  Junnan Li · Ramprasaath Selvaraju · Akhilesh Gotmare · Shafiq Joty · Caiming Xiong · Steven Chu Hong Hoi
- 2021: User-in-the-Loop Named Entity Recognition via Counterfactual Learning »
  Tong Yu · Junda Wu · Ruiyi Zhang · Handong Zhao · Shuai Li
- 2022 Poster: Delving into Out-of-Distribution Detection with Vision-Language Representations »
  Yifei Ming · Ziyang Cai · Jiuxiang Gu · Yiyou Sun · Wei Li · Yixuan Li
- 2021 Poster: Slow Learning and Fast Inference: Efficient Graph Similarity Computation via Knowledge Distillation »
  Can Qin · Handong Zhao · Lichen Wang · Huan Wang · Yulun Zhang · Yun Fu
- 2021 Poster: Align before Fuse: Vision and Language Representation Learning with Momentum Distillation »
  Junnan Li · Ramprasaath Selvaraju · Akhilesh Gotmare · Shafiq Joty · Caiming Xiong · Steven Chu Hong Hoi
- 2021 Poster: UniDoc: Unified Pretraining Framework for Document Understanding »
  Jiuxiang Gu · Jason Kuen · Vlad I Morariu · Handong Zhao · Rajiv Jain · Nikolaos Barmpalios · Ani Nenkova · Tong Sun
- 2020 Poster: Data Diversification: A Simple Strategy For Neural Machine Translation »
  Xuan-Phi Nguyen · Shafiq Joty · Kui Wu · Ai Ti Aw