

Poster

UMFC: Unsupervised Multi-Domain Feature Calibration for Vision-Language Models

Jiachen Liang · RuiBing Hou · Minyang Hu · Hong Chang · Shiguang Shan · Xilin Chen

East Exhibit Hall A-C #3703
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Pre-trained vision-language models ($\textit{e.g.}$, CLIP) have shown powerful zero-shot transfer capabilities. However, they still struggle with domain shifts and typically require labeled data to adapt to downstream tasks, which can be costly. In this work, we aim to leverage unlabeled data that naturally spans multiple domains to enhance the transferability of vision-language models. Nevertheless, we have identified inherent model bias within CLIP, notably in its visual and text encoders. Specifically, we observe that CLIP’s visual encoder tends to prioritize encoding domain information over discriminative category information, while its text encoder exhibits a preference for domain-relevant classes. To mitigate this model bias, we propose a $\textit{training-free}$ and $\textit{label-free}$ feature calibration method, Unsupervised Multi-domain Feature Calibration (UMFC). Specifically, UMFC estimates image-level biases from domain-specific features and text-level biases from the direction of domain transition. These biases are subsequently subtracted from the original image and text features, respectively, to render them domain-invariant. We evaluate our method on multiple settings including transductive learning and test-time adaptation. Extensive experiments show that our method outperforms CLIP and performs on par with state-of-the-art methods that require additional annotations or optimization.
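To make the calibration idea concrete, here is a minimal sketch of the procedure the abstract describes, under several assumptions not stated above: latent domains are recovered by k-means on unlabeled image features, the image-level bias is taken as each image's domain centroid offset from the global mean, and the text-level bias as the mean of the domain centroids. The function name `calibrate_features` and the parameter `n_domains` are hypothetical; the paper's actual estimators may differ.

```python
# Illustrative sketch only: the clustering step and the specific bias
# estimators below are assumptions, not the paper's exact formulation.
import numpy as np
from sklearn.cluster import KMeans


def calibrate_features(image_feats, text_feats, n_domains=4, seed=0):
    """Subtract estimated domain biases from CLIP-style features.

    image_feats: (N, D) unlabeled image features spanning multiple domains.
    text_feats:  (C, D) class-prompt text features.
    n_domains:   assumed number of latent domains (hypothetical parameter).
    """
    # 1) Group unlabeled images into pseudo-domains (assumption: k-means
    #    on image features roughly recovers the domain structure).
    km = KMeans(n_clusters=n_domains, n_init=10, random_state=seed)
    domain_ids = km.fit_predict(image_feats)
    centroids = km.cluster_centers_              # (n_domains, D)

    # 2) Image-level bias: each image subtracts its domain centroid's offset
    #    from the global mean, removing domain-specific components.
    global_mean = image_feats.mean(axis=0, keepdims=True)
    image_bias = centroids[domain_ids] - global_mean
    image_calib = image_feats - image_bias

    # 3) Text-level bias: approximate the "direction of domain transition"
    #    by the mean of the domain centroids and subtract it from every
    #    class embedding (assumption for illustration).
    text_bias = centroids.mean(axis=0, keepdims=True)
    text_calib = text_feats - text_bias

    # Re-normalize so cosine similarity remains meaningful for zero-shot
    # classification.
    image_calib /= np.linalg.norm(image_calib, axis=1, keepdims=True)
    text_calib /= np.linalg.norm(text_calib, axis=1, keepdims=True)
    return image_calib, text_calib
```

Because the calibration only subtracts precomputed vectors, it requires no gradient updates or labels, which is consistent with the training-free and label-free claim in the abstract.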
