
Quantifying and Improving Transferability in Domain Generalization
Guojun Zhang · Han Zhao · Yaoliang Yu · Pascal Poupart

Tue Dec 07 04:30 PM -- 06:00 PM (PST)

Out-of-distribution generalization is one of the key challenges when transferring a model from the lab to the real world. Existing efforts mostly focus on building invariant features across source and target domains. Based on invariant features, a high-performing classifier on the source domains could hopefully behave equally well on a target domain. In other words, we hope that the invariant features are transferable. However, in practice there are no perfectly transferable features, and some algorithms seem to learn "more transferable" features than others. How can we understand and quantify such transferability? In this paper, we formally define a notion of transferability that can be quantified and computed in domain generalization. We point out its difference from, and connection with, common discrepancy measures between domains, such as total variation and Wasserstein distance. We then prove that our transferability can be estimated with enough samples, and we give a new upper bound on the target error based on it. Empirically, we evaluate the transferability of the feature embeddings learned by existing algorithms for domain generalization. Surprisingly, we find that many algorithms do not quite learn transferable features, although a few still do. In light of this, we propose a new algorithm for learning transferable features and test it on various benchmark datasets, including RotatedMNIST, PACS, Office-Home and WILDS-FMoW. Experimental results show that the proposed algorithm achieves consistent improvement over many state-of-the-art algorithms, corroborating our theoretical findings.
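The abstract contrasts the proposed transferability measure with common domain discrepancies such as total variation and Wasserstein distance. As a rough illustration only, and not the paper's definition of transferability, the sketch below estimates both discrepancies between two synthetic 1-D "domains"; the Gaussian shift and binning are hypothetical choices made for this example.

```python
# Illustrative sketch (not the paper's method): estimating the two discrepancy
# measures mentioned above, total variation and Wasserstein-1 distance,
# between two empirical 1-D distributions standing in for a source and a
# (shifted) target domain.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
source = rng.normal(loc=0.0, scale=1.0, size=10_000)  # "source domain" samples
target = rng.normal(loc=0.5, scale=1.0, size=10_000)  # shifted "target domain" samples

# Wasserstein-1 distance between the two empirical samples.
w1 = wasserstein_distance(source, target)

# Total variation distance estimated from histograms on a shared binning:
# TV(P, Q) = 0.5 * sum_x |P(x) - Q(x)| for the discretized densities.
bins = np.linspace(-5.0, 5.0, 101)
p, _ = np.histogram(source, bins=bins)
q, _ = np.histogram(target, bins=bins)
p = p / p.sum()
q = q / q.sum()
tv = 0.5 * np.abs(p - q).sum()

print(f"Wasserstein-1: {w1:.3f}, total variation: {tv:.3f}")
```

Unlike these purely distributional discrepancies, the transferability studied in the paper is tied to how well a classifier trained on the source domains carries over to the target domain.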

Author Information

Guojun Zhang (University of Waterloo)

I am a third-year Ph.D. student in the David R. Cheriton School of Computer Science at the University of Waterloo, and am also a student affiliate of the Vector Institute. My supervisors are Pascal Poupart and Yaoliang Yu. I am working on optimization problems in machine learning.

Han Zhao (Carnegie Mellon University)
Yaoliang Yu (University of Waterloo)
Pascal Poupart (University of Waterloo & Vector Institute)
