

Poster in Workshop: Causal Representation Learning

Learning to ignore: Single Source Domain Generalization via Oracle Regularization

Dong Kyu Cho · Sanghack Lee

Keywords: [ Causal Representation Learning ] [ Out-of-Distribution Robustness ] [ Domain Generalization ]


Abstract:

Machine learning models frequently suffer from discrepancies between training and test data distributions, commonly known as domain shift. Single-source Domain Generalization (sDG) is a task that simulates domain shift artificially in order to train a model that generalizes from a single source domain to multiple unseen target domains. A popular approach is to learn robustness by aligning the representations of augmented samples. However, prior work has frequently overlooked what is actually learned from such alignment. In this paper, we study the effectiveness of augmentation-based sDG methods through a causal interpretation of the data-generating process. We highlight issues in using augmentation for generalization, namely, the distinction between domain invariance and augmentation invariance. To alleviate these issues, we introduce a novel regularization method that leverages pretrained models to guide the learning process via feature-level regularization, which we name PROF (Progressive mutual information Regularization for Online distillation of Frozen oracles). PROF can be applied to conventional augmentation-based methods to moderate the impact of stochasticity in models repeatedly trained on augmented data, encouraging the model to learn domain-invariant representations. We empirically show that PROF stabilizes the learning process for sDG.
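To make the idea concrete, below is a minimal PyTorch sketch of one training step that combines augmentation alignment with feature-level regularization toward a frozen pretrained oracle. All names here (student, oracle, augment, the loss weights) are illustrative assumptions, and a cosine-similarity term stands in for the paper's mutual-information objective, which the abstract does not specify.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of oracle-regularized training for single-source DG.
# Hypothetical names throughout; this is not the authors' implementation.

def oracle_regularized_step(student, oracle, classifier, x, y,
                            augment, lambda_align=1.0, lambda_oracle=1.0):
    """One step: task loss + augmentation alignment + feature-level
    regularization toward a frozen pretrained oracle."""
    x_aug = augment(x)               # stochastic augmentation of the source batch

    z = student(x)                   # features of clean samples
    z_aug = student(x_aug)           # features of augmented samples

    # Standard classification loss on the single source domain.
    task_loss = F.cross_entropy(classifier(z), y)

    # Alignment of augmented views, as in augmentation-based sDG methods.
    align_loss = F.mse_loss(z_aug, z)

    # Oracle regularization: keep student features close to those of a
    # frozen pretrained model (cosine similarity as a stand-in for the
    # mutual-information objective; assumes matched feature dimensions,
    # e.g., via a projection head).
    with torch.no_grad():
        z_oracle = oracle(x)         # oracle is frozen; no gradients flow
    oracle_loss = 1.0 - F.cosine_similarity(z, z_oracle, dim=-1).mean()

    return task_loss + lambda_align * align_loss + lambda_oracle * oracle_loss
```

Because the oracle is frozen, its features provide a stable target across training, which is one way such a regularizer could dampen the stochasticity introduced by repeated training on augmented data.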
