Unsupervised Representation Learning by Invariance Propagation
Feng Wang · Huaping Liu · Di Guo · Fuchun Sun

Mon Dec 07 08:20 PM -- 08:30 PM (PST) @ Orals & Spotlights: Representation/Relational

Unsupervised learning methods based on contrastive learning have drawn increasing attention and achieved promising results. Most of them aim to learn representations invariant to instance-level variations, which are provided by different views of the same instance. In this paper, we propose Invariance Propagation to focus on learning representations invariant to category-level variations, which are provided by different instances from the same category. Our method recursively discovers semantically consistent samples residing in the same high-density regions of the representation space. We further propose a hard sampling strategy that concentrates on maximizing the agreement between the anchor sample and its hard positive samples, which provide more intra-class variations and help capture more abstract invariance. As a result, with a ResNet-50 backbone, our method achieves 71.3% top-1 accuracy on ImageNet linear classification and 78.2% top-5 accuracy when fine-tuning on only 1% of the labels, surpassing previous results. We also achieve state-of-the-art performance on other downstream tasks, including linear classification on Places205 and Pascal VOC, and transfer learning on small-scale datasets.
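The two ideas in the abstract — recursively expanding a positive set through high-density neighbourhoods, and weighting the least-similar ("hard") positives in a contrastive objective — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names (`propagate_positives`, `hard_positive_loss`), the neighbourhood size `k`, the propagation depth `steps`, and the temperature `tau` are all assumptions made for the example.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def propagate_positives(sim, anchor, k=4, steps=2):
    # Recursively discover positives: start from the anchor, then repeatedly
    # add the k nearest neighbours of the current frontier, approximating
    # samples that lie in the same high-density region.
    positives = {anchor}
    frontier = {anchor}
    for _ in range(steps):
        nxt = set()
        for i in frontier:
            nbrs = np.argsort(-sim[i])[:k + 1]  # top-k neighbours (includes i itself)
            nxt.update(int(j) for j in nbrs if int(j) not in positives)
        positives.update(nxt)
        frontier = nxt
    positives.discard(anchor)
    return sorted(positives)

def hard_positive_loss(emb, anchor, positives, n_hard=2, tau=0.1):
    # Keep only the n_hard positives least similar to the anchor ("hard"
    # positives) and maximize their agreement with the anchor via a
    # softmax/InfoNCE-style objective over all other samples.
    sims = emb @ emb[anchor]
    hard = sorted(positives, key=lambda j: sims[j])[:n_hard]
    logits = np.delete(sims, anchor) / tau          # drop the anchor itself
    log_z = np.log(np.exp(logits).sum())            # log partition function
    idx = [j if j < anchor else j - 1 for j in hard]  # re-index after deletion
    return float(np.mean(log_z - logits[idx]))      # mean -log p(hard positive)
```

Selecting the *least* similar positives (rather than the most similar) is the key design choice: easy positives are near-duplicates of the anchor and contribute little gradient, while hard positives carry the intra-class variation the representation must become invariant to.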

Author Information

Feng Wang (Tsinghua University)
Huaping Liu (Tsinghua University)
Di Guo (Tsinghua University)
Fuchun Sun (Tsinghua University)
