Contrastive learning, which learns to contrast positive against negative pairs of samples, has become popular for self-supervised visual representation learning. Although great effort has been made to design proper positive pairs through data augmentation, few works attempt to generate optimal positives for each instance. Inspired by the semantic consistency and computational efficiency of the latent space of pretrained generative models, this paper proposes to learn instance-specific latent transformations to generate Contrastive Optimal Positives (COP-Gen) for self-supervised contrastive learning. Specifically, we formulate COP-Gen as an instance-specific latent-space navigator that minimizes the mutual information between the generated positive pair subject to a semantic consistency constraint. Theoretically, the learned latent transformation creates optimal positives for contrastive learning, removing as much nuisance information as possible while preserving the semantics. Empirically, positives generated by COP-Gen consistently outperform those from other latent transformation methods, and even real-image-based methods, in self-supervised contrastive learning.
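The core idea — producing a positive pair by applying a latent shift to a generator's input and scoring the pair with a contrastive objective — can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the "generator" `G` and "encoder" `F` below are hypothetical random linear maps standing in for a pretrained GAN and a CNN encoder, `delta` stands in for the learned instance-specific latent transformation, and the loss is the standard InfoNCE estimator commonly used in contrastive learning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (assumptions, not the paper's models):
# G maps latents to "images", F maps "images" to features.
LATENT_DIM, IMG_DIM, FEAT_DIM, BATCH = 16, 64, 8, 32
G = rng.normal(size=(LATENT_DIM, IMG_DIM))  # frozen pretrained generator
F = rng.normal(size=(IMG_DIM, FEAT_DIM))    # encoder being trained

def normalize(x):
    """L2-normalize feature rows so similarities are cosine similarities."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def info_nce(z, delta, tau=0.1):
    """InfoNCE loss over positive pairs (G(z), G(z + delta)).

    In COP-Gen's framing, the navigator would choose delta to make the
    positive share as little nuisance information with the anchor as
    possible, subject to a semantic consistency constraint (omitted here).
    """
    a = normalize((z @ G) @ F)            # anchor features
    p = normalize(((z + delta) @ G) @ F)  # positive features
    logits = a @ p.T / tau                # anchor-vs-all-positives similarity
    # Cross-entropy with the matching (diagonal) positive as the target.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

z = rng.normal(size=(BATCH, LATENT_DIM))            # per-instance latents
delta = 0.1 * rng.normal(size=(BATCH, LATENT_DIM))  # instance-specific shift
loss = info_nce(z, delta)
print(float(loss))
```

In a full pipeline, the encoder would be trained to minimize this loss while the navigator producing `delta` is optimized for the mutual-information/semantic-consistency objective described in the abstract; here both are frozen random maps purely to show the data flow.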
Author Information
Hong Chang (Institute of Computing Technology, Chinese Academy of Sciences)
Bingpeng MA (University of Chinese Academy of Sciences)
Shiguang Shan (Chinese Academy of Sciences)
Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Optimal Positive Generation via Latent Transformation for Contrastive Learning »
More from the Same Authors
- 2023 Poster: Understanding Few-Shot Learning: Measuring Task Relatedness and Adaptation Difficulty via Attributes »
  Minyang Hu · Hong Chang · Zong Guo · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2023 Poster: Glance and Focus: Memory Prompting for Multi-Event Video Question Answering »
  Ziyi Bai · Ruiping Wang · Xilin Chen
- 2023 Poster: Generalized Semi-Supervised Learning via Self-Supervised Feature Adaptation »
  Jiachen Liang · RuiBing Hou · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2022 Spotlight: Lightning Talks 3B-4 »
  Guanghu Yuan · Yijing Liu · Li Yang · Yongri Piao · Zekang Zhang · Yaxin Xiao · Lin Chen · Hong Chang · Fajie Yuan · Guangyu Gao · Hong Chang · Qinxian Liu · Zhixiang Wei · Qingqing Ye · Chenyang Lu · Jian Meng · Haibo Hu · Xin Jin · Yudong Li · Miao Zhang · Zhiyuan Fang · Jae-sun Seo · Bingpeng MA · Jian-Wei Zhang · Shiguang Shan · Haozhe Feng · Huaian Chen · Deliang Fan · Huadi Zheng · Jianbo Jiao · Huchuan Lu · Beibei Kong · Miao Zheng · Chengfang Fang · Shujie Li · Zhongwei Wang · Yunchao Wei · Xilin Chen · Jie Shi · Kai Chen · Zihan Zhou · Lei Chen · Yi Jin · Wei Chen · Min Yang · Chenyun YU · Bo Hu · Zang Li · Yu Xu · Xiaohu Qie
- 2021 Poster: HRFormer: High-Resolution Vision Transformer for Dense Prediction »
  YUHUI YUAN · Rao Fu · Lang Huang · Weihong Lin · Chao Zhang · Xilin Chen · Jingdong Wang
- 2019 Poster: Cross Attention Network for Few-shot Classification »
  Ruibing Hou · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2019 Poster: Multi-label Co-regularization for Semi-supervised Facial Action Unit Recognition »
  Xuesong Niu · Hu Han · Shiguang Shan · Xilin Chen
- 2014 Poster: Generalized Unsupervised Manifold Alignment »
  Zhen Cui · Hong Chang · Shiguang Shan · Xilin Chen
- 2014 Poster: Self-Paced Learning with Diversity »
  Lu Jiang · Deyu Meng · Shoou-I Yu · Zhenzhong Lan · Shiguang Shan · Alexander Hauptmann