Poster
An Empirical Investigation of Domain Generalization with Empirical Risk Minimizers
Ramakrishna Vedantam · David Lopez-Paz · David Schwab

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

Recent work demonstrates that deep neural networks trained using Empirical Risk Minimization (ERM) can generalize under distribution shift, outperforming specialized training algorithms for domain generalization. The goal of this paper is to further understand this phenomenon. In particular, we study the extent to which the seminal domain adaptation theory of Ben-David et al. (2007) explains the performance of ERM models. Perhaps surprisingly, we find that this theory does not provide a tight explanation of the out-of-domain generalization observed across a large number of ERM models trained on three popular domain generalization datasets. This motivates us to investigate other possible measures, currently lacking theoretical grounding, that could explain generalization in this setting. Our investigation reveals that measures relating to the Fisher information, predictive entropy, and maximum mean discrepancy are good predictors of the out-of-distribution generalization of ERM models. We hope that our work helps galvanize the community towards building a better understanding of when deep networks trained with ERM generalize out-of-distribution.
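This listing carries no code, but as a rough sketch of one of the measures the abstract names, the maximum mean discrepancy (MMD) between two samples can be estimated with a kernel two-sample statistic. The RBF kernel, bandwidth `gamma`, and toy Gaussian data below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Pairwise RBF (Gaussian) kernel matrix between rows of a and b."""
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * sq)

def mmd2(x, y, gamma=1.0):
    """Biased (V-statistic) estimate of the squared MMD between samples x and y."""
    return (rbf_kernel(x, x, gamma).mean()
            + rbf_kernel(y, y, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean())

# Toy check: the estimate should be near zero for matched distributions
# and clearly larger once the second sample is shifted.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 5))
same = mmd2(x, rng.normal(0.0, 1.0, size=(200, 5)))
shifted = mmd2(x, rng.normal(2.0, 1.0, size=(200, 5)))
print(same, shifted)
```

The biased estimator is used here because it is nonnegative and simple; an unbiased variant would drop the diagonal terms of the within-sample kernel matrices.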

Author Information

Ramakrishna Vedantam (Facebook AI Research)
David Lopez-Paz (Facebook AI Research)
David Schwab (CUNY Graduate Center)
