Semantic segmentation has a broad range of applications, but its real-world impact has been significantly limited by the prohibitive annotation costs required for deployment. Segmentation methods that forgo supervision can side-step these costs, but they carry the inconvenient requirement of providing labelled examples from the target distribution to assign concept names to predictions. An alternative line of work in language-image pre-training has recently demonstrated the potential to produce models that can both assign names across large vocabularies of concepts and enable zero-shot transfer for classification, but these models do not demonstrate commensurate segmentation abilities. We leverage the retrieval abilities of one such language-image pre-trained model, CLIP, to dynamically curate training sets from unlabelled images for arbitrary collections of concept names, and leverage the robust correspondences offered by modern image representations to co-segment entities among the resulting collections. The synthetic segment collections are then employed to construct a segmentation model (without requiring pixel labels) whose knowledge of concepts is inherited from the scalable pre-training process of CLIP. We demonstrate that our approach, termed Retrieve and Co-segment (ReCo), performs favourably against conventional unsupervised segmentation approaches while inheriting the convenience of nameable predictions and zero-shot transfer. We also demonstrate ReCo’s ability to generate specialist segmenters for extremely rare objects.
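To make the retrieval stage of the pipeline concrete, the sketch below curates a concept-specific training set from a folder of unlabelled images by ranking them against a text prompt with CLIP. It is a minimal illustration using OpenAI's public CLIP package, not the authors' released ReCo code: the prompt template, the `curate` helper, the image folder, and the choice of `k` are all assumptions for the example, and the co-segmentation and segmenter-training stages are omitted.

```python
# Sketch of CLIP-based training-set curation (the "Retrieve" step).
# Assumes the OpenAI CLIP package: https://github.com/openai/CLIP
from pathlib import Path

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)


def curate(concept: str, image_dir: str, k: int = 50) -> list[Path]:
    """Return the k unlabelled images most similar to `concept` under CLIP."""
    # Embed the concept name as a text query (prompt template is an assumption).
    text = clip.tokenize([f"a photo of a {concept}"]).to(device)
    with torch.no_grad():
        text_feat = model.encode_text(text)
        text_feat /= text_feat.norm(dim=-1, keepdim=True)

    # Score every unlabelled image by cosine similarity to the query.
    paths = sorted(Path(image_dir).glob("*.jpg"))
    scores = []
    with torch.no_grad():
        for path in paths:
            image = preprocess(Image.open(path)).unsqueeze(0).to(device)
            image_feat = model.encode_image(image)
            image_feat /= image_feat.norm(dim=-1, keepdim=True)
            scores.append((image_feat @ text_feat.T).item())

    # Keep the top-k matches as the curated set for this concept.
    ranked = sorted(zip(scores, paths), key=lambda t: t[0], reverse=True)
    return [path for _, path in ranked[:k]]


# e.g. curate("fire hydrant", "unlabelled_images/") yields a concept-specific
# image collection, with no pixel labels, ready for co-segmentation.
```

In the full method, the resulting per-concept collections would be passed to a co-segmentation stage that exploits correspondences in modern image representations, and the extracted segments would then supervise a segmentation model.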
Author Information
Gyungin Shin (Visual Geometry Group, Oxford)
Weidi Xie (University of Oxford)
Samuel Albanie (Oxford University)
More from the Same Authors
- 2022 Poster: Segmenting Moving Objects via an Object-Centric Layered Representation
  Junyu Xie · Weidi Xie · Andrew Zisserman
- 2022 Poster: RLIP: Relational Language-Image Pre-training for Human-Object Interaction Detection
  Hangjie Yuan · Jianwen Jiang · Samuel Albanie · Tao Feng · Ziyuan Huang · Dong Ni · Mingqian Tang
- 2022 Spotlight: Lightning Talks 6A-3
  Junyu Xie · Chengliang Zhong · Ali Ayub · Sravanti Addepalli · Harsh Rangwani · Jiapeng Tang · Yuchen Rao · Zhiying Jiang · Yuqi Wang · Xingzhe He · Gene Chou · Ilya Chugunov · Samyak Jain · Yuntao Chen · Weidi Xie · Sumukh K Aithal · Carter Fendley · Lev Markhasin · Yiqin Dai · Peixing You · Bastian Wandt · Yinyu Nie · Helge Rhodin · Felix Heide · Ji Xin · Angela Dai · Andrew Zisserman · Bi Wang · Xiaoxue Chen · Mayank Mishra · ZHAO-XIANG ZHANG · Venkatesh Babu R · Justus Thies · Ming Li · Hao Zhao · Venkatesh Babu R · Jimmy Lin · Fuchun Sun · Matthias Niessner · Guyue Zhou · Xiaodong Mu · Chuang Gan · Wenbing Huang
- 2022 Spotlight: Segmenting Moving Objects via an Object-Centric Layered Representation
  Junyu Xie · Weidi Xie · Andrew Zisserman
- 2022 Spotlight: RLIP: Relational Language-Image Pre-training for Human-Object Interaction Detection
  Hangjie Yuan · Jianwen Jiang · Samuel Albanie · Tao Feng · Ziyuan Huang · Dong Ni · Mingqian Tang
- 2022 Poster: Associating Objects and Their Effects in Video through Coordination Games
  Erika Lu · Forrester Cole · Weidi Xie · Tali Dekel · Bill Freeman · Andrew Zisserman · Michael Rubinstein
- 2021 Workshop: The pre-registration workshop: an alternative publication model for machine learning research
  Samuel Albanie · João Henriques · Luca Bertinetto · Alex Hernandez-Garcia · Hazel Doughty · Gul Varol
- 2020 Workshop: The pre-registration experiment: an alternative publication model for machine learning research
  Luca Bertinetto · João Henriques · Samuel Albanie · Michela Paganini · Gul Varol
- 2020 Poster: Self-supervised Co-Training for Video Representation Learning
  Tengda Han · Weidi Xie · Andrew Zisserman
- 2018 Poster: Gather-Excite: Exploiting Feature Context in Convolutional Neural Networks
  Jie Hu · Li Shen · Samuel Albanie · Gang Sun · Andrea Vedaldi