ZIN: When and How to Learn Invariance Without Environment Partition?

Yong Lin · Shengyu Zhu · Lu Tan · Peng Cui

Keywords: [ invariant risk minimization ] [ Out-of-Domain Generalization ] [ transfer learning ]

Spotlight presentation: Lightning Talks 5B-4
Thu 8 Dec 10:30 a.m. PST — 10:45 a.m. PST


It is commonplace to encounter heterogeneous data in which some aspects of the data distribution vary while the underlying causal mechanisms remain constant. When data are divided into distinct environments according to this heterogeneity, recent invariant learning methods can use the environment partition to learn robust and invariant models. It is hence tempting to exploit the inherent heterogeneity even when an environment partition is not provided. Unfortunately, in this work, we show that learning invariant features in this setting is fundamentally impossible without further inductive biases or additional information. We then propose a framework that jointly learns the environment partition and the invariant representation, assisted by additional auxiliary information. We derive sufficient and necessary conditions under which our framework provably identifies invariant features in a fairly general setting. Experimental results on both synthetic and real-world datasets validate our analysis and demonstrate improved performance of the proposed framework. Our findings also highlight the need for future work to make the role of inductive biases explicit when learning invariant models without an environment partition. Code is available at .
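To make the joint-learning idea concrete, below is a minimal illustrative sketch, not the paper's exact objective: auxiliary information softly assigns each sample to inferred environments, and an invariance penalty discourages the per-environment risks of the predictor from differing. The function name `zin_objective`, the softmax partition model, and the variance-of-risks penalty are assumptions made for illustration.

```python
import numpy as np

def zin_objective(losses, aux, w, penalty_weight=1.0):
    """Illustrative ZIN-style objective (a sketch, not the paper's exact form).

    losses: (n,) per-sample losses of the candidate invariant predictor
    aux:    (n, d) auxiliary information for each sample
    w:      (d, k) parameters of a soft environment-partition model
    """
    # Softly assign each sample to k inferred environments via a softmax
    logits = aux @ w
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)     # (n, k) soft assignments

    # Average loss within each inferred environment, weighted by assignment
    env_mass = probs.sum(axis=0)                  # (k,) total soft mass
    env_risk = (probs * losses[:, None]).sum(axis=0) / np.maximum(env_mass, 1e-8)

    # Invariance penalty: risks should not vary across inferred environments
    penalty = env_risk.var()
    return losses.mean() + penalty_weight * penalty, env_risk

rng = np.random.default_rng(0)
obj, risks = zin_objective(rng.random(100),
                           rng.normal(size=(100, 3)),
                           rng.normal(size=(3, 2)))
```

In the full method, the partition parameters would be trained adversarially to find the split under which the predictor looks least invariant, while the predictor minimizes the resulting penalized risk; this sketch only evaluates the objective for fixed parameters.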
