
Controllable Invariance through Adversarial Feature Learning
Qizhe Xie · Zihang Dai · Yulun Du · Eduard Hovy · Graham Neubig

Wed Dec 06 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #121

Learning meaningful representations that maintain the content necessary for a particular task while filtering away detrimental variations is a problem of great interest in machine learning. In this paper, we tackle the problem of learning representations invariant to a specific factor or trait of data. The representation learning process is formulated as an adversarial minimax game. We analyze the optimal equilibrium of such a game and find that it amounts to maximizing the uncertainty of inferring the detrimental factor given the representation, while maximizing the certainty of making task-specific predictions. On three benchmark tasks, namely fair and bias-free classification, language-independent generation, and lighting-independent image classification, we show that the proposed framework induces an invariant representation and leads to better generalization, as evidenced by improved performance.
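The minimax objective described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the cross-entropy form, and the trade-off weight `lam` are assumptions for exposition. The task predictor's loss is minimized while the attribute discriminator's loss enters with a negative sign, so the encoder is rewarded when the detrimental factor becomes hard to infer from the representation.

```python
import math

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true labels under predicted
    # class probabilities (one probability vector per example).
    return -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(labels)

def minimax_objective(task_probs, task_labels, attr_probs, attr_labels, lam=1.0):
    # Value of the adversarial game for one batch (illustrative form).
    # The encoder and task predictor descend on this value; the attribute
    # discriminator ascends on it. Pushing the attribute term toward chance
    # level maximizes the uncertainty of inferring the detrimental factor
    # from the representation, while the task term keeps predictions certain.
    l_task = cross_entropy(task_probs, task_labels)   # task prediction loss
    l_attr = cross_entropy(attr_probs, attr_labels)   # attribute inference loss
    return l_task - lam * l_attr
```

In practice the two terms are optimized in alternation (or via gradient reversal): the discriminator is trained to predict the attribute from the representation, and the encoder is updated to lower the game value, i.e. to confuse the discriminator while preserving task accuracy.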

Author Information

Qizhe Xie (Carnegie Mellon University)
Zihang Dai (Carnegie Mellon University)
Yulun Du (Carnegie Mellon University)
Eduard Hovy (Carnegie Mellon University)
Graham Neubig (Carnegie Mellon University)