
Re-labeling Domains Improves Multi-Domain Generalization
Kowshik Thopalli · Pavan Turaga · Jayaraman Thiagarajan
Event URL: https://openreview.net/forum?id=UQGYhou3oEi

Domain generalization (DG) methods aim to develop models that generalize to settings where the test distribution differs from the training data. In this paper, we focus on the challenging problem of multi-source zero-shot DG, where labeled training data from multiple source domains is available but there is no access to data from the target domain. Although this problem has become an important topic of research, surprisingly, the naive solution of pooling all source data together and training a single classifier via empirical risk minimization (ERM) is highly competitive on standard benchmarks. More importantly, even sophisticated approaches that explicitly optimize for invariance across different domains do not necessarily provide non-trivial gains over ERM. We hypothesize that this behavior arises from poor definitions of the domain splits themselves. In this paper, we make a first attempt to understand the role that pre-defined domain labels play in the success of domain-aware DG methods. To this end, we ignore the domain labels that come with the dataset and instead perform unsupervised clustering to infer domain splits, then train the DG method with these inferred labels. We also introduce a novel regularization to improve the behavior of this alternating optimization process. We conduct analysis on two standard benchmarks, PACS and VLCS, and demonstrate the benefit of re-categorizing samples into new domain groups for DG performance.
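The re-labeling step described above can be sketched as follows. This is a minimal illustration, assuming sample features have already been extracted (e.g., by a backbone network); the `infer_domain_labels` helper and the plain k-means routine are our own stand-ins — the paper's actual clustering choice, alternating training schedule, and regularizer are not specified in this abstract.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means with greedy farthest-point initialization."""
    centers = [X[0]]
    for _ in range(1, k):
        # Choose the point farthest from all current centers,
        # so initial centers are guaranteed to be distinct.
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.array(centers, dtype=float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center, then update centers.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return labels

def infer_domain_labels(features, num_domains):
    # Hypothetical helper: discard the dataset's predefined domain splits
    # and re-label samples by unsupervised clustering of their features.
    # A domain-aware DG method would then be trained with these labels.
    return kmeans(features, num_domains)

# Toy demo: two well-separated feature groups are recovered as pseudo-domains.
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
pseudo_domains = infer_domain_labels(X, num_domains=2)
```

In the alternating scheme the abstract alludes to, clustering and DG training would repeat: updated features yield new pseudo-domain labels, which in turn guide the next round of domain-aware training.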

Author Information

Kowshik Thopalli (Arizona State University)
Pavan Turaga (Arizona State University)
Jayaraman Thiagarajan (Lawrence Livermore National Laboratory)
