Poster
On Calibration and Out-of-Domain Generalization
Yoav Wald · Amir Feder · Daniel Greenfeld · Uri Shalit

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Out-of-domain (OOD) generalization is a significant challenge for machine learning models. Many techniques have been proposed to overcome this challenge, often focused on learning models with certain invariance properties. In this work, we draw a link between OOD performance and model calibration, arguing that calibration across multiple domains can be viewed as a special case of an invariant representation leading to better OOD generalization. Specifically, we show that under certain conditions, models which achieve multi-domain calibration are provably free of spurious correlations. This leads us to propose multi-domain calibration as a measurable and trainable surrogate for the OOD performance of a classifier. We therefore introduce methods that are easy to apply and allow practitioners to improve multi-domain calibration by training or modifying an existing model, leading to better performance on unseen domains. Using four datasets from the recently proposed WILDS OOD benchmark, as well as Colored MNIST, we demonstrate that training or tuning models so they are calibrated across multiple domains leads to significantly improved performance on unseen test domains. We believe this intriguing connection between calibration and OOD generalization is promising from both a practical and theoretical point of view.
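The "measurable" side of the surrogate proposed above can be illustrated by averaging a standard calibration metric, such as binned expected calibration error (ECE), over the training domains. The sketch below is an assumption-laden illustration of that idea in NumPy; the function names and binning choices are ours, not the authors' code, and the paper's actual training objective may differ.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # Standard binned ECE: weighted average over bins of
    # |accuracy in bin - mean confidence in bin|.
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece

def multi_domain_calibration_score(preds_by_domain):
    # Hypothetical helper: average ECE across domains.
    # `preds_by_domain` maps a domain id to a pair of arrays
    # (predicted confidences, 0/1 correctness indicators).
    # A low score on held-out domains is the measurable proxy
    # for OOD performance that the abstract describes.
    return float(np.mean([expected_calibration_error(conf, corr)
                          for conf, corr in preds_by_domain.values()]))
```

A model could then be selected or tuned (e.g., via temperature scaling per domain) to minimize this score on held-out training domains before deployment to unseen ones.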

Author Information

Yoav Wald (Johns Hopkins University)
Amir Feder (Technion - Israel Institute of Technology)
Daniel Greenfeld (Weizmann Institute)
Uri Shalit (Technion)
