

Spotlight in Workshop: UniReps: Unifying Representations in Neural Models

SAM-CLIP: Merging Vision Foundation Models towards Semantic and Spatial Understanding

Haoxiang Wang · Pavan Kumar Anasosalu Vasu · Fartash Faghri · Raviteja Vemulapalli · Mehrdad Farajtabar · Sachin Mehta · Mohammad Rastegari · Oncel Tuzel · Hadi Pouransari

[ Project Page ]
 
presentation: UniReps: Unifying Representations in Neural Models
Fri 15 Dec 6:15 a.m. PST — 3:15 p.m. PST

Abstract:

The landscape of publicly available vision foundation models (VFMs), such as CLIP and SAM, is expanding rapidly. VFMs are endowed with distinct capabilities stemming from their pretraining objectives. For instance, CLIP excels in semantic understanding, while SAM specializes in spatial understanding for segmentation. In this work, we introduce a simple recipe based on multi-task distillation to efficiently merge VFMs into a unified model that assimilates their expertise. By applying our method to SAM and CLIP, we derive SAM-CLIP: a unified model that amalgamates the strengths of SAM and CLIP into a single backbone, making it apt for edge device applications. We show that SAM-CLIP learns richer visual representations, equipped with both localization and semantic features, suitable for a broad range of vision tasks. We further show that SAM-CLIP not only retains the foundational strengths of its precursor models but also introduces synergistic functionalities, most notably in zero-shot semantic segmentation, where SAM-CLIP establishes new state-of-the-art results. It outperforms previous models that are specifically designed for this task by a large margin, including +6.8% and +5.9% mean IoU improvement on Pascal-VOC and COCO-Stuff datasets, respectively.
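The merging recipe above rests on multi-task distillation from the two frozen teachers (CLIP and SAM) into a single student backbone. Below is a minimal PyTorch sketch of such a combined distillation objective; the function, tensor names, loss choices, and weights are illustrative assumptions for exposition, not the authors' actual implementation, which also involves separate CLIP- and SAM-style heads and a specific training schedule.

    # Minimal sketch of a two-teacher, multi-task distillation loss.
    # Assumes a shared student backbone that produces (i) a global image
    # embedding for the semantic (CLIP-like) branch and (ii) dense features
    # for the spatial (SAM-like) branch. All names are hypothetical.
    import torch.nn.functional as F

    def multi_task_distillation_loss(
        student_clip_emb,   # (B, D) student global embedding
        teacher_clip_emb,   # (B, D) frozen CLIP teacher embedding
        student_sam_feat,   # (B, C, H, W) student dense features
        teacher_sam_feat,   # (B, C, H, W) frozen SAM teacher dense features
        clip_weight=1.0,    # illustrative weighting, not from the paper
        sam_weight=1.0,
    ):
        # Cosine-similarity distillation for the semantic branch.
        clip_loss = 1.0 - F.cosine_similarity(
            student_clip_emb, teacher_clip_emb, dim=-1
        ).mean()
        # L2 distillation on dense features for the spatial branch.
        sam_loss = F.mse_loss(student_sam_feat, teacher_sam_feat)
        # Weighted sum merges both objectives into one training signal.
        return clip_weight * clip_loss + sam_weight * sam_loss

In this kind of setup, the student is trained on the weighted sum of both distillation terms, so a single backbone is pushed to reproduce both teachers' representations.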
