

Poster

Towards Flexible Visual Relationship Segmentation

Fangrui Zhu · Jianwei Yang · Huaizu Jiang

[ Project Page ]
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Visual relationship understanding has been studied separately in human-object interaction (HOI) detection, scene graph generation (SGG), and referring expression comprehension (REC) tasks. Given the complexity and interconnectedness of these tasks, it is crucial to have a flexible framework that can address them in a cohesive manner. In this work, we propose Flex-VRS, a single model that seamlessly integrates all three aspects in standard and promptable visual relationship segmentation, and further possesses the capability for open-vocabulary segmentation to adapt to novel scenarios. Flex-VRS leverages the synergy between text and image modalities to ground various types of relationships from images, and uses textual features from vision-language models for visual conceptual understanding. Empirical validation across various datasets demonstrates that our framework outperforms existing models in standard, promptable, and open-vocabulary tasks, e.g., +1.9 mAP on HICO-DET, +11.4 Acc on VRD, and +4.7 mAP on unseen HICO-DET. Our work represents a significant step towards a more intuitive, comprehensive, and scalable understanding of visual relationships.
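
The abstract does not describe how textual features enable open-vocabulary recognition, so the following is only a minimal sketch of the general idea it alludes to: scoring candidate relationship predicates, including unseen ones, against a visual feature using a vision-language model's text encoder. It assumes a CLIP text encoder from Hugging Face transformers; the predicate phrases and the pair feature are placeholders, and this is not Flex-VRS's actual implementation.

```python
# Sketch of open-vocabulary relationship scoring (illustration only,
# not the Flex-VRS implementation): embed candidate predicate phrases
# with a VLM text encoder and rank them by cosine similarity against
# a visual feature of one subject-object pair.
import torch
from transformers import CLIPModel, CLIPTokenizer

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

# Candidate predicates, including ones never seen during training.
predicates = [
    "person riding a horse",
    "person feeding a horse",
    "person walking a dog",
]
tokens = tokenizer(predicates, padding=True, return_tensors="pt")
with torch.no_grad():
    text_emb = model.get_text_features(**tokens)  # (num_predicates, 512)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# Placeholder for a pooled visual feature of a subject-object pair;
# in practice this would come from the segmentation model's backbone.
pair_feat = torch.randn(1, 512)
pair_feat = pair_feat / pair_feat.norm(dim=-1, keepdim=True)

# Cosine similarity over the predicate vocabulary, as a distribution.
scores = (pair_feat @ text_emb.T).softmax(dim=-1)
print(dict(zip(predicates, scores.squeeze(0).tolist())))
```

Because the predicate set is just a list of phrases, swapping in novel relationship descriptions requires no retraining, which is the property the open-vocabulary results (e.g., unseen HICO-DET) depend on.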
