Our work reveals a structural shortcoming of existing mainstream self-supervised learning methods. Whereas self-supervised learning frameworks usually take the prevailing assumption of perfect instance-level invariance for granted, we carefully investigate the pitfalls behind it. In particular, we argue that the existing augmentation pipelines for generating multiple positive views naturally introduce out-of-distribution (OOD) samples that undermine learning on downstream tasks: generating diverse positive augmentations of the input does not always pay off in downstream performance. To overcome this inherent deficiency, we introduce UOTA, a lightweight latent variable model that targets the view-sampling issue in self-supervised learning. UOTA adaptively searches for the most important sampling regions to produce views, and provides a viable choice for outlier-robust self-supervised learning. Our method generalizes directly to many mainstream self-supervised learning approaches, regardless of whether their loss is contrastive. We empirically show UOTA's advantage over state-of-the-art self-supervised paradigms by an evident margin, which justifies the existence of the OOD-sample issue embedded in existing approaches. Notably, we theoretically prove that the merits of the proposal boil down to guaranteed reductions in estimator variance and bias. Code is available: https://github.com/ssl-codelab/uota.
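To make the view-weighting idea concrete, below is a minimal PyTorch sketch of the general technique the abstract describes, not the paper's actual UOTA model: views whose embeddings sit far from their instance's mean embedding are treated as likely OOD and down-weighted before they contribute to the training loss. The function names (`uota_style_weights`, `weighted_alignment_loss`) and the temperature parameter are hypothetical illustrations; see the repository above for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def uota_style_weights(view_embs, tau=0.1):
    # view_embs: (B, V, D) L2-normalized embeddings, V augmented views per instance.
    # Score each view by cosine similarity to its instance's mean embedding;
    # far-away (likely OOD) views receive small softmax weights.
    center = F.normalize(view_embs.mean(dim=1, keepdim=True), dim=-1)  # (B, 1, D)
    sim = (view_embs * center).sum(dim=-1)                             # (B, V)
    return torch.softmax(sim / tau, dim=1)                             # rows sum to 1

def weighted_alignment_loss(view_embs, weights):
    # A simple non-contrastive alignment term: pull each view toward the
    # instance center, scaled by its importance weight. On the unit sphere,
    # ||u - c||^2 = 2 - 2 * u.c for unit vectors u and c.
    center = F.normalize(view_embs.mean(dim=1, keepdim=True), dim=-1)
    dist = 2 - 2 * (view_embs * center).sum(dim=-1)                    # (B, V)
    return (weights * dist).sum(dim=1).mean()

# Usage on random embeddings standing in for an encoder's outputs.
B, V, D = 8, 4, 128
embs = F.normalize(torch.randn(B, V, D), dim=-1)
w = uota_style_weights(embs)
loss = weighted_alignment_loss(embs, w)
```

Because the weighting acts on views rather than on the loss itself, the same reweighting can in principle be dropped in front of a contrastive or non-contrastive objective alike, which matches the abstract's claim of loss-agnostic applicability.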
Author Information
Yu Wang (JD AI Research)
Jingyang Lin (Sun Yat-sen University)
Jingjing Zou (University of California, San Diego)
Yingwei Pan (JD AI Research)
Ting Yao (JD AI Research)
Tao Mei (AI Research of JD.com)
More from the Same Authors
- 2022 Poster: Out-of-Distribution Detection via Conditional Kernel Independence Model
  Yu Wang · Jingjing Zou · Jingyang Lin · Qing Ling · Yingwei Pan · Ting Yao · Tao Mei
- 2022 Poster: Generalized One-shot Domain Adaptation of Generative Adversarial Networks
  Zicheng Zhang · Yinglu Liu · Congying Han · Tiande Guo · Ting Yao · Tao Mei
- 2022 Spotlight: Lightning Talks 6B-4
  Junjie Chen · Chuanxia Zheng · Jinlong Li · Yu Shi · Shichao Kan · Yu Wang · Fermín Travi · Ninh Pham · Lei Chai · Guobing Gan · Tung-Long Vuong · Gonzalo Ruarte · Tao Liu · Li Niu · Jingjing Zou · Zequn Jie · Peng Zhang · Ming Li · Yixiong Liang · Guolin Ke · Jianfei Cai · Gaston Bujia · Sunzhu Li · Siyuan Zhou · Jingyang Lin · Xu Wang · Min Li · Zhuoming Chen · Qing Ling · Xiaolin Wei · Xiuqing Lu · Shuxin Zheng · Dinh Phung · Yigang Cen · Jianlou Si · Juan Esteban Kamienkowski · Jianxin Wang · Chen Qian · Lin Ma · Benyou Wang · Yingwei Pan · Tie-Yan Liu · Liqing Zhang · Zhihai He · Ting Yao · Tao Mei
- 2022 Spotlight: Out-of-Distribution Detection via Conditional Kernel Independence Model
  Yu Wang · Jingjing Zou · Jingyang Lin · Qing Ling · Yingwei Pan · Ting Yao · Tao Mei
- 2020 Poster: Joint Contrastive Learning with Infinite Possibilities
  Qi Cai · Yu Wang · Yingwei Pan · Ting Yao · Tao Mei
- 2020 Spotlight: Joint Contrastive Learning with Infinite Possibilities
  Qi Cai · Yu Wang · Yingwei Pan · Ting Yao · Tao Mei