

Poster in Workshop on Distribution Shifts: New Frontiers with Foundation Models

Pseudo-Calibration: Improving Predictive Uncertainty Estimation in Domain Adaptation

Dapeng Hu · Jian Liang · Xinchao Wang · Chuan Sheng Foo

Keywords: [ Domain Adaptation; Predictive Uncertainty; Model Calibration ]


Abstract:

Unsupervised domain adaptation (UDA) improves model accuracy in an unlabeled target domain using a labeled source domain. However, UDA models often lack calibrated predictive uncertainty on target data, posing risks in safety-critical applications. In this paper, we address this under-explored challenge with Pseudo-Calibration (PseudoCal), a novel post-hoc calibration framework. In contrast to prior approaches, we treat UDA calibration as a target-domain-specific unsupervised problem rather than a covariate shift problem across domains. With a synthesized labeled pseudo-target set that captures the structure of the real target, we turn the unsupervised calibration problem into a supervised one, readily solvable with temperature scaling. Extensive empirical evaluation across 5 diverse UDA scenarios involving 10 UDA methods, including unsupervised fine-tuning of foundation models such as CLIP, consistently demonstrates the superior performance of PseudoCal over alternative calibration methods.
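The abstract does not detail how the labeled pseudo-target set is synthesized, so the sketch below only illustrates the final supervised step it mentions: fitting a single temperature on a (pseudo-)labeled set and using it to calibrate target predictions. The function name `fit_temperature` and the way pseudo-labels are obtained are assumptions for illustration, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, pseudo_labels, lr=0.01, max_iter=100):
    """Fit a scalar temperature T by minimizing NLL on a (pseudo-)labeled set.

    logits: [N, C] uncalibrated model outputs on the pseudo-target set
    pseudo_labels: [N] integer labels assigned to the pseudo-target samples
    """
    # Optimize log T so that T = exp(log_T) stays positive.
    log_T = torch.zeros(1, requires_grad=True)
    optimizer = torch.optim.LBFGS([log_T], lr=lr, max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_T.exp(), pseudo_labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_T.exp().item()

# Hypothetical usage: calibrate real target predictions with the fitted T.
# T = fit_temperature(pseudo_logits, pseudo_labels)
# probs = F.softmax(target_logits / T, dim=1)
```

The temperature is fit once, post hoc, on held-out pseudo-target logits and then applied unchanged to the real target predictions; it rescales confidence without altering the predicted class.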
