Poster

Towards Calibrated Robust Fine-Tuning of Vision-Language Models

Changdae Oh · Hyesu Lim · Mijoo Kim · Dongyoon Han · Sangdoo Yun · Jaegul Choo · Alexander Hauptmann · Zhi-Qi Cheng · Kyungwoo Song

East Exhibit Hall A-C #4309
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Improving out-of-distribution (OOD) generalization through in-distribution (ID) adaptation is a primary goal of robust fine-tuning methods, beyond naive fine-tuning. However, despite the decent OOD generalization of recent robust fine-tuning methods, OOD confidence calibration, which is essential for reliable machine learning, has not been fully addressed. This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration in vision-language models (VLMs). First, we show that the OOD generalization and calibration errors share an upper bound consisting of two quantities computed on ID data: (1) the ID calibration error and (2) the smallest singular value of the ID input covariance matrix. Based on this insight, we design a novel framework that fine-tunes with a constrained multimodal contrastive loss enforcing a larger smallest singular value, further aided by self-distillation from a moving-averaged model to achieve well-calibrated prediction. Starting from an empirical validation of our theoretical statements, we provide extensive experimental results on ImageNet distribution-shift benchmarks that demonstrate the effectiveness of our method.
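For intuition, the following is a minimal PyTorch sketch of the kind of objective the abstract describes: a CLIP-style contrastive loss, a penalty that pushes up the smallest singular value of the batch feature covariance, and self-distillation from an exponential-moving-average (EMA) teacher. The model interface, the inverse-singular-value surrogate, and the hyperparameter names (lambda_sv, lambda_kd, tau) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' released code): a CLIP-style
# contrastive fine-tuning step augmented with (a) a penalty encouraging a
# larger smallest singular value of the image-feature covariance and
# (b) self-distillation from an EMA (moving-averaged) copy of the model.
import torch
import torch.nn.functional as F

def contrastive_loss(img_feats, txt_feats, temperature=0.07):
    # Standard symmetric InfoNCE over L2-normalized features.
    img_feats = F.normalize(img_feats, dim=-1)
    txt_feats = F.normalize(txt_feats, dim=-1)
    logits = img_feats @ txt_feats.t() / temperature
    labels = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def smallest_singular_value_penalty(feats):
    # Covariance of the centered batch features; its smallest singular
    # value appears in the shared upper bound, so we penalize its inverse
    # to push it up (one possible differentiable surrogate, an assumption).
    centered = feats - feats.mean(dim=0, keepdim=True)
    cov = centered.t() @ centered / max(feats.size(0) - 1, 1)
    sigma_min = torch.linalg.svdvals(cov)[-1]  # values sorted descending
    return 1.0 / (sigma_min + 1e-6)

def distillation_loss(student_logits, teacher_logits, tau=1.0):
    # KL divergence between softened student and EMA-teacher predictions.
    return F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                    F.softmax(teacher_logits / tau, dim=-1),
                    reduction="batchmean") * tau ** 2

@torch.no_grad()
def ema_update(ema_model, model, decay=0.999):
    # Keep the teacher as an exponential moving average of the student.
    for p_ema, p in zip(ema_model.parameters(), model.parameters()):
        p_ema.lerp_(p, 1.0 - decay)

def training_step(model, ema_model, images, texts,
                  lambda_sv=0.1, lambda_kd=1.0):
    # Assumed interface: model returns image features, text features,
    # and classification logits for the batch.
    img_feats, txt_feats, logits = model(images, texts)
    with torch.no_grad():
        _, _, teacher_logits = ema_model(images, texts)
    return (contrastive_loss(img_feats, txt_feats)
            + lambda_sv * smallest_singular_value_penalty(img_feats)
            + lambda_kd * distillation_loss(logits, teacher_logits))
```

Penalizing the inverse of the smallest singular value is only one way to encode the constraint; a hard projection or hinge penalty on sigma_min would serve the same purpose of enforcing a larger smallest singular value during fine-tuning.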
