
Workshop: Robustness of zero/few-shot learning in foundation models (R0-FoMo)

CLIPA-v2: Scaling CLIP Training with 81.1% Zero-shot ImageNet Accuracy within a $10,000 Budget

Xianhang Li · Zeyu Wang · Cihang Xie

Abstract: The recent work CLIPA presents an inverse scaling law for CLIP training: the larger the image/text encoders used, the shorter the sequence length of image/text tokens that can be applied in training. This finding enables us to train high-performance CLIP models with significantly reduced computation. Building upon this work, we present CLIPA-v2 with two key contributions. Technically, we find this inverse scaling law is also applicable in the finetuning stage, enabling a further reduction in computational needs. Empirically, we explore CLIPA at scale, extending the experiments up to the H/14 model with approximately 13B image-text pairs seen during training. Our results are exciting: by allocating a budget of only $10,000, our CLIP model achieves an impressive zero-shot ImageNet accuracy of 81.1%, surpassing the prior best CLIP model (from OpenCLIP, 80.1%) by 1.0% while reducing the computational cost by approximately 39×. Moreover, with an additional investment of $4,000, we can further elevate the zero-shot ImageNet accuracy to 81.8%. By upscaling a G/14 model, we achieve a state-of-the-art zero-shot ImageNet accuracy of 83.0%, relying solely on open-source data.
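To make the compute saving behind the inverse scaling law concrete: in a ViT-based CLIP image encoder, the number of image tokens grows quadratically with input resolution, so training at a reduced resolution shrinks the sequence length (and roughly the attention cost) quadratically. The sketch below is illustrative only, not the authors' code, and the specific resolutions are hypothetical examples.

```python
def num_image_tokens(image_size: int, patch_size: int) -> int:
    """Number of patch tokens a ViT produces for a square image."""
    assert image_size % patch_size == 0, "image size must be divisible by patch size"
    return (image_size // patch_size) ** 2

# Hypothetical example for an H/14-style encoder (14px patches):
# full 224px input vs. a reduced 84px input used during pretraining.
full_tokens = num_image_tokens(224, 14)    # 16 * 16 = 256 tokens
short_tokens = num_image_tokens(84, 14)    # 6 * 6 = 36 tokens
print(full_tokens, short_tokens, full_tokens / short_tokens)
```

Running this shows the shorter input cuts the image token count by roughly 7×, which is where the bulk of CLIPA's training-cost reduction comes from.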
