
Learning Robust Global Representations by Penalizing Local Predictive Power
Haohan Wang · Songwei Ge · Zachary Lipton · Eric Xing

Thu Dec 12 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #57

Despite their renowned in-domain predictive power, convolutional neural networks are known to rely more on high-frequency patterns that humans deem superficial than on low-frequency patterns that agree better with intuitions about what constitutes category membership. This paper proposes a method for training robust convolutional networks by penalizing the predictive power of the local representations learned by earlier layers. Intuitively, our networks are forced to discard predictive signals such as color and texture that can be gleaned from local receptive fields and to rely instead on the global structure of the image. Across a battery of synthetic and benchmark domain adaptation tasks, our method improves out-of-domain generalization. Additionally, to evaluate cross-domain transfer, we introduce ImageNet-Sketch, a new dataset consisting of sketch-like images that matches the ImageNet classification validation set in scale and dimension.
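Penalizing a side classifier's predictive power is commonly implemented with a gradient-reversal layer, as in domain-adversarial training: the local classifier is trained to predict the label from early-layer features, while the gradient it sends back to the encoder is negated, pushing the encoder to make those local features *less* predictive. The snippet below is a minimal 1-D sketch of that mechanism under these assumptions, not the authors' implementation; the function name `local_branch_grads` and all toy values are illustrative.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def local_branch_grads(x, y, w, v, lam=1.0):
    """Gradients of a patch-classifier loss for a 1-D toy model.

    Encoder: z = w * x; local classifier: p = sigmoid(v * z).
    The classifier weight v descends on its own loss as usual, but the
    gradient reaching the encoder weight w is multiplied by -lam
    (gradient reversal), so the encoder is updated to make the local
    feature less predictive of the label y.
    """
    z = w * x
    p = sigmoid(v * z)
    err = p - y                       # d(BCE)/d(logit) for a sigmoid output
    grad_v = err * z                  # classifier head: ordinary descent
    grad_w_plain = err * v * x        # what the encoder would receive without reversal
    grad_w_rev = -lam * grad_w_plain  # reversed gradient actually sent to the encoder
    return grad_v, grad_w_plain, grad_w_rev

gv, gw_plain, gw_rev = local_branch_grads(x=2.0, y=1.0, w=0.5, v=1.0, lam=1.0)
```

In a full network the same idea applies per spatial location: a small head classifies each local feature vector, and only the gradient flowing from that head into the shared backbone is negated.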

Author Information

Haohan Wang (Carnegie Mellon University)
Songwei Ge (Carnegie Mellon University)
Zachary Lipton (Carnegie Mellon University)
Eric Xing (Petuum Inc. / Carnegie Mellon University)
