LOG: Active Model Adaptation for Label-Efficient OOD Generalization

Jie-Jing Shao · Lan-Zhe Guo · Xiao-wen Yang · Yu-Feng Li

Keywords: [ Active Learning ] [ Out-of-Distribution Generalization ] [ Domain Adaptation ]

Spotlight presentation: Lightning Talks 2B-3
Tue 6 Dec 6 p.m. PST — 6:15 p.m. PST


This work studies how to achieve worst-case Out-of-Distribution (OOD) generalization across a variety of distributions at a relatively small labeling cost. The problem has broad applications, especially in non-i.i.d. open-world scenarios. Previous studies either rely on a large labeling budget or lack guarantees on worst-case generalization. In this work, we show for the first time that active model adaptation can achieve both good performance and robustness based on the invariant risk minimization principle. We propose LOG, an interactive model adaptation framework with two sub-modules: active sample selection and causal invariant learning. Specifically, we formulate active selection as a mixture distribution separation problem and present an unbiased estimator that, with a provable guarantee, finds the samples violating the current invariant relationship. Our theoretical analysis shows that both sub-modules contribute to generalization. Extensive experimental results confirm the promising performance of the new algorithm.
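The abstract's core idea, selecting for labeling the samples that violate the invariant relationship learned so far, can be illustrated with a toy sketch. Everything below is a hypothetical construction, not the paper's algorithm: the data generator, the ridge fit, and the `violation_score` function are assumptions made for illustration. In particular, the score here uses held-out labels only to show that distribution-shifted samples score high; the paper's contribution is an unbiased estimator that identifies such samples without their labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed, not from the paper): an invariant feature x1 whose
# relation to y is stable, and a spurious feature x2 whose correlation
# with y flips sign between environments.
def make_env(n, spurious_sign):
    x1 = rng.normal(size=n)
    y = x1 + 0.1 * rng.normal(size=n)
    x2 = spurious_sign * y + 0.1 * rng.normal(size=n)
    return np.stack([x1, x2], axis=1), y

def fit_ridge(X, y, lam=1e-3):
    # Plain pooled least squares: the "current" (non-invariant) predictor.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Labeled pool from one environment; the unlabeled pool mixes in a shifted
# environment where the spurious correlation is reversed.
X_lab, y_lab = make_env(200, +1.0)
X_shift, y_shift = make_env(100, -1.0)   # violates the learned relationship
X_same, y_same = make_env(100, +1.0)     # consistent with it

w = fit_ridge(X_lab, y_lab)  # leans heavily on the spurious feature x2

def violation_score(X, y, w):
    # Illustrative score: squared residual under the current predictor.
    # (Uses labels only for this demo; a real selector cannot.)
    return (X @ w - y) ** 2

s_shift = violation_score(X_shift, y_shift, w).mean()
s_same = violation_score(X_same, y_same, w).mean()
# Shifted samples incur far larger violation, so an active selector that
# ranks by such a score would prioritize them for labeling.
```

Labeling the high-violation samples and refitting under an invariance penalty would push the model off the spurious feature `x2` toward the stable feature `x1`, which is the intuition behind pairing active selection with causal invariant learning.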
