

Oral in Workshop on Distribution Shifts: New Frontiers with Foundation Models

Does CLIP’s generalization performance mainly stem from high train-test similarity?

Prasanna Mayilvahanan · Thaddäus Wiedemer · Evgenia Rusak · Matthias Bethge · Wieland Brendel

Keywords: [ OOD robustness ] [ robustness ] [ CLIP ] [ ImageNet ] [ Distribution Shift ] [ foundation models ] [ Self-supervised learning ] [ LAION ] [ ImageNet-A ] [ ImageNet-V2 ] [ ObjectNet ] [ ImageNet-Sketch ] [ ImageNet-R ] [ vision language models ] [ contrastive learning ] [ generalization ]

[ Project Page ]
Fri 15 Dec 11:35 a.m. PST — 11:45 a.m. PST

Abstract:

Foundation models like CLIP are trained on hundreds of millions of samples and effortlessly generalize to new tasks and inputs. Out of the box, CLIP shows stellar zero-shot and few-shot capabilities on a wide range of out-of-distribution (OOD) benchmarks, which prior works attribute mainly to today's large and comprehensive training datasets (like LAION). However, it is questionable how meaningful terms like out-of-distribution generalization are for CLIP, as it seems likely that web-scale datasets like LAION simply contain many samples that are similar to common OOD benchmarks originally designed for ImageNet. To test this hypothesis, we retrain CLIP on pruned LAION splits that replicate ImageNet’s train-test similarity with respect to common OOD benchmarks. While we observe a performance drop on some benchmarks, surprisingly, CLIP’s overall performance remains high. This shows that high train-test similarity is insufficient to explain CLIP’s performance.
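The pruning setup described in the abstract can be illustrated with a minimal, hypothetical sketch; this is not the authors' released pipeline. It assumes precomputed image embeddings (for example from a CLIP image encoder) for the LAION training pool and for an OOD benchmark's test set, and the names prune_by_similarity, train_embeddings, test_embeddings, and threshold are placeholders.

import numpy as np

def prune_by_similarity(train_embeddings, test_embeddings, threshold):
    # Normalize rows so that dot products become cosine similarities.
    train = train_embeddings / np.linalg.norm(train_embeddings, axis=1, keepdims=True)
    test = test_embeddings / np.linalg.norm(test_embeddings, axis=1, keepdims=True)
    # For each training sample, its highest similarity to any test sample.
    max_sim = (train @ test.T).max(axis=1)
    # Keep only samples whose nearest-neighbor similarity stays at or below
    # the chosen threshold; the rest are pruned from the training split.
    return max_sim <= threshold

In such a setup, the threshold would be calibrated per benchmark so that the pruned LAION split matches the train-test similarity that ImageNet's training set exhibits with respect to that benchmark, which is the comparison the abstract describes.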
