ZIN: When and How to Learn Invariance Without Environment Partition?
Yong Lin · Shengyu Zhu · Lu Tan · Peng Cui


It is commonplace to encounter heterogeneous data, in which some aspects of the data distribution may vary while the underlying causal mechanisms remain constant. When data are divided into distinct environments according to this heterogeneity, recent invariant learning methods propose to learn robust and invariant models using the environment partition. It is hence tempting to exploit the inherent heterogeneity even when an environment partition is not provided. Unfortunately, in this work, we show that learning invariant features in this setting is fundamentally impossible without further inductive biases or additional information. We then propose a framework that jointly learns the environment partition and an invariant representation, assisted by auxiliary information. We derive sufficient and necessary conditions under which our framework provably identifies invariant features in a fairly general setting. Experimental results on both synthetic and real-world datasets validate our analysis and demonstrate the improved performance of the proposed framework. Our findings also highlight the need for future work to make the role of inductive biases more explicit when learning invariant models without an environment partition. Code is available at https://github.com/linyongver/ZIN_official .
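To make the idea above concrete, here is a minimal, hypothetical sketch of the kind of objective such a framework might compute: a soft environment assignment is inferred from auxiliary information (here, a scalar `z` such as a timestamp), and an invariance penalty compares the risks of a predictor across the inferred environments. All function names and the toy data are illustrative assumptions for exposition, not the paper's actual implementation or API.

```python
# Illustrative sketch only: soft environment inference from auxiliary info z,
# plus a simple invariance penalty (squared gap between per-environment risks).
import math
import random

random.seed(0)

def rho(z, w=4.0, b=-2.0):
    """Soft assignment of a sample to environment 1, from auxiliary info z.
    (In a learned framework, w and b would be trained; here they are fixed.)"""
    return 1.0 / (1.0 + math.exp(-(w * z + b)))

def per_env_risks(data, predict):
    """Soft-weighted squared-error risk in each of the two inferred environments."""
    num = [0.0, 0.0]
    den = [1e-12, 1e-12]
    for x, y, z in data:
        p = rho(z)                       # weight toward environment 1
        err = (predict(x) - y) ** 2
        num[0] += (1 - p) * err
        den[0] += (1 - p)
        num[1] += p * err
        den[1] += p
    return [num[0] / den[0], num[1] / den[1]]

# Toy heterogeneous data: the spurious feature's correlation with y
# flips sign depending on the auxiliary variable z.
data = []
for _ in range(2000):
    z = random.random()
    x_inv = random.gauss(0, 1)           # invariant cause of y
    y = x_inv + 0.1 * random.gauss(0, 1)
    sign = 1.0 if z < 0.5 else -1.0
    x_spur = sign * y + 0.1 * random.gauss(0, 1)
    data.append(((x_inv, x_spur), y, z))

invariant = lambda x: x[0]   # predictor using only the invariant feature
spurious = lambda x: x[1]    # predictor using only the spurious feature

for name, f in [("invariant", invariant), ("spurious", spurious)]:
    r = per_env_risks(data, f)
    penalty = (r[0] - r[1]) ** 2         # small iff risks match across environments
    print(name, "penalty:", round(penalty, 3))
```

The invariant predictor incurs similar risk in both inferred environments, so its penalty is near zero, while the spurious predictor's risk gap is large; a joint training procedure would use such a penalty to favor invariant features while simultaneously adapting the environment-assignment function.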

Author Information

Yong Lin (The Hong Kong University of Science and Technology)

I am a CSE PhD student at HKUST, supervised by Professor Tong Zhang. My research interests are out-of-distribution generalization, robustness of deep learning, and learning theory. In particular, we are currently working on topics related to invariant learning. If you are also interested in these fields, or just in my work, feel free to have a chat with me.

Shengyu Zhu (Ubiquant)
Lu Tan (Tsinghua University)
Peng Cui (Tsinghua University)