Poster

Iterative Feature Matching: Toward Provable Domain Generalization with Logarithmic Environments

Yining Chen · Elan Rosenfeld · Mark Sellke · Tengyu Ma · Andrej Risteski

Hall J (level 1) #405

Keywords: [ domain generalization theory ] [ invariant risk minimization ] [ domain generalization ] [ out-of-distribution generalization ] [ IRM ] [ deep learning theory ]


Abstract: Domain generalization aims at performing well on unseen test environments given data from a limited number of training environments. Despite a proliferation of proposed algorithms for this task, assessing their performance, both theoretically and empirically, remains very challenging. Distribution-matching algorithms such as (Conditional) Domain Adversarial Networks [Ganin et al., 2016, Long et al., 2018] are popular and enjoy empirical success, but they lack formal guarantees. Other approaches such as Invariant Risk Minimization (IRM) require a prohibitively large number of training environments (linear in the dimension $d_s$ of the spurious-feature space) even on simple data models like the one proposed by [Rosenfeld et al., 2021]. Under a variant of this model, we show that both ERM and IRM can fail to find the optimal invariant predictor with $o(d_s)$ environments. We then present an iterative feature matching algorithm that is guaranteed, with high probability, to find the optimal invariant predictor after seeing only $O(\log d_s)$ environments. Our results provide the first theoretical justification, under a concrete nontrivial data model, for the distribution-matching algorithms widely used in practice.
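
To make the idea concrete, here is a minimal toy sketch in NumPy of feature matching on a linear-Gaussian data model in the spirit of [Rosenfeld et al., 2021]: class-conditional means of invariant features are fixed across environments, while those of spurious features vary. All names, dimensions, and the projection heuristic below are illustrative assumptions, not the paper's exact algorithm; in particular, this sketch prunes one spurious direction per fresh environment (linear in $d_s$), whereas the paper matches richer feature statistics so that each environment removes a constant fraction of the spurious subspace, yielding the $O(\log d_s)$ bound.

```python
# Toy sketch: iteratively remove feature directions whose class-conditional
# means disagree across environments. Such directions cannot be invariant.
# Hypothetical illustration only, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
d_inv, d_spu = 2, 4          # invariant / spurious feature dimensions (assumed)
d = d_inv + d_spu
n = 2000                     # samples per environment

def sample_env(mu_spu):
    """One environment: invariant means are fixed, spurious means vary."""
    y = rng.integers(0, 2, n) * 2 - 1                     # labels in {-1, +1}
    x_inv = y[:, None] * np.ones(d_inv) + rng.normal(size=(n, d_inv))
    x_spu = y[:, None] * mu_spu + rng.normal(size=(n, d_spu))
    return np.hstack([x_inv, x_spu]), y

def class_mean_gap(X, y, P):
    """Difference of class-conditional means in the projected feature space."""
    Z = X @ P
    return Z[y == 1].mean(0) - Z[y == -1].mean(0)

P = np.eye(d)                                             # current projection
X0, y0 = sample_env(rng.normal(size=d_spu))               # reference environment

# Each round: draw a fresh environment, find the direction along which its
# class-conditional means disagree with the reference, and project it out.
for t in range(d_spu):
    Xt, yt = sample_env(rng.normal(size=d_spu))
    gap = class_mean_gap(Xt, yt, P) - class_mean_gap(X0, y0, P)
    v = gap / (np.linalg.norm(gap) + 1e-12)
    P = P @ (np.eye(d) - np.outer(v, v))

# On a held-out environment, the remaining class-mean gap through P is now
# (nearly) purely invariant: spurious directions have been projected out.
Xe, ye = sample_env(rng.normal(size=d_spu))
print(np.round(class_mean_gap(Xe, ye, P), 2))
```

In this sketch only first-moment (class-mean) statistics are matched, so one environment exposes one spurious direction at a time; matching full conditional distributions, as distribution-matching methods do, is what lets each environment constrain many spurious directions simultaneously.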
