

Poster

Self-Supervised Relationship Probing

Jiuxiang Gu · Jason Kuen · Shafiq Joty · Jianfei Cai · Vlad I. Morariu · Handong Zhao · Tong Sun

Poster Session 0 #17

Abstract:

Structured representations of images that model visual relationships are beneficial for many vision and vision-language applications. However, current human-annotated visual relationship datasets suffer from the long-tailed predicate distribution problem, which limits the potential of visual relationship models. In this work, we introduce a self-supervised method that implicitly learns visual relationships without relying on any ground-truth visual relationship annotations. Our method relies on 1) intra- and inter-modality encodings to model relationships within each modality separately and across modalities jointly, and 2) relationship probing, which seeks to discover the graph structure within each modality. By leveraging masked language modeling, contrastive learning, and dependency tree distances for self-supervision, our method learns better object features as well as implicit visual relationships. We verify the effectiveness of our proposed method on various vision-language tasks that benefit from improved visual relationship understanding.
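To make the relationship-probing idea more concrete, the following is a minimal sketch in the style of a structural probe: a learned linear projection under which squared L2 distances between contextualized features are trained to match target tree distances (e.g., dependency tree distances for words). The class name `RelationshipProbe`, the rank, and the loss are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class RelationshipProbe(nn.Module):
    """Structural-probe-style sketch (assumed formulation): learn a linear map
    whose squared L2 distances between projected features approximate target
    tree distances, from which a graph/tree structure can be read off."""

    def __init__(self, feat_dim: int, probe_rank: int = 64):
        super().__init__()
        self.proj = nn.Linear(feat_dim, probe_rank, bias=False)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_items, feat_dim) contextualized word or object features
        h = self.proj(feats)                    # (B, N, rank)
        diff = h.unsqueeze(2) - h.unsqueeze(1)  # (B, N, N, rank) pairwise differences
        return (diff ** 2).sum(dim=-1)          # (B, N, N) predicted squared distances


def probe_loss(pred_dist: torch.Tensor,
               gold_dist: torch.Tensor,
               pair_mask: torch.Tensor) -> torch.Tensor:
    """L1 loss between predicted and gold tree distances over valid item pairs."""
    per_pair = (pred_dist - gold_dist).abs() * pair_mask
    return per_pair.sum() / pair_mask.sum().clamp(min=1)


if __name__ == "__main__":
    # Toy usage: 2 sequences of 5 items with 768-d features and random gold distances.
    feats = torch.randn(2, 5, 768)
    gold = torch.rand(2, 5, 5)
    mask = torch.ones(2, 5, 5)
    probe = RelationshipProbe(feat_dim=768)
    loss = probe_loss(probe(feats), gold, mask)
    loss.backward()
```

For text, gold distances can come from an off-the-shelf dependency parse; for image regions, the analogous pairwise distances are what the method aims to learn implicitly, so this sketch only illustrates the probing mechanism itself.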
