In semi-supervised learning (SSL), a common practice is to learn consistent information from unlabeled data and discriminative information from labeled data, ensuring both the immutability and the separability of the classification model. Existing SSL methods fail in barely-supervised learning (BSL), where only one or two labels per class are available, because such scarce labels make the discriminative information difficult or even infeasible to learn. To bridge this gap, we investigate a simple yet effective way to leverage unlabeled samples for discriminative learning and propose a novel discriminative information learning module to benefit model training. Specifically, we formulate the learning objective of discriminative information at the super-class level and dynamically assign different classes to different super-classes based on model performance improvement. On top of this on-the-fly process, we further propose a distribution-based loss to learn discriminative information by utilizing the similarity relationship between samples and super-classes. It encourages each unlabeled sample to stay closer to the distribution of its corresponding super-class than to those of the other super-classes. Such a constraint is softer than the direct assignment of pseudo labels, which can be very noisy in BSL. We compare our method with state-of-the-art SSL and BSL methods through extensive experiments on standard SSL benchmarks, where it achieves superior results, e.g., an average accuracy of 76.76% on CIFAR-10 with merely 1 label per class.
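The distribution-based super-class loss described above can be illustrated with a minimal PyTorch-style sketch. This is a hypothetical illustration, not the authors' implementation: it assumes each class is already mapped to a super-class and that each super-class distribution is summarized by a prototype (mean feature vector); the names `super_class_loss`, `class_to_super`, and `prototypes` are introduced here for the example only.

```python
# Hypothetical sketch of a distribution-based super-class loss
# (not the authors' released code).
import torch
import torch.nn.functional as F

def super_class_loss(feats, class_probs, class_to_super, prototypes, temperature=0.1):
    """
    feats:          (B, D) features of unlabeled samples
    class_probs:    (B, C) model class predictions for the same samples
    class_to_super: (C,)   long tensor mapping each class to its super-class
    prototypes:     (S, D) per-super-class mean features (the "distributions")
    """
    batch_size, num_super = feats.size(0), prototypes.size(0)

    # Aggregate class probabilities into super-class probabilities and take
    # the most likely super-class as the target for each sample.
    super_probs = torch.zeros(batch_size, num_super, device=feats.device)
    super_probs.index_add_(1, class_to_super, class_probs)
    target_super = super_probs.argmax(dim=1)

    # Similarity of each sample to every super-class prototype.
    sim = F.cosine_similarity(feats.unsqueeze(1), prototypes.unsqueeze(0), dim=-1)

    # Cross-entropy over super-classes pulls a sample toward its own
    # super-class distribution and away from the others -- a softer
    # constraint than assigning a class-level pseudo label.
    return F.cross_entropy(sim / temperature, target_super)

if __name__ == "__main__":
    # Toy usage: 8 unlabeled samples, 10 classes grouped into 2 super-classes.
    feats = torch.randn(8, 128)
    class_probs = torch.softmax(torch.randn(8, 10), dim=1)
    class_to_super = torch.tensor([0] * 5 + [1] * 5)
    prototypes = torch.randn(2, 128)
    print(super_class_loss(feats, class_probs, class_to_super, prototypes))
```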
Author Information
Guan Gui (Nanjing University)
Zhen Zhao (University of Sydney)
Lei Qi (Southeast University)
Luping Zhou (University of Sydney)
Lei Wang (University of Wollongong)
Yinghuan Shi (Nanjing University)
Related Events (a corresponding poster, oral, or spotlight)
- 2022 Poster: Improving Barely Supervised Learning by Discriminating Unlabeled Samples with Super-Class
More from the Same Authors
- 2022 Poster: Domain Generalization by Learning and Removing Domain-specific Features
  Yu Ding · Lei Wang · Bin Liang · Shuming Liang · Yang Wang · Fang Chen
- 2023 Poster: Densely Annotated Synthetic Images Make Stronger Semantic Segmentation Models
  Lihe Yang · Xiaogang Xu · Bingyi Kang · Yinghuan Shi · Hengshuang Zhao
- 2022 Spotlight: Domain Generalization by Learning and Removing Domain-specific Features
  Yu Ding · Lei Wang · Bin Liang · Shuming Liang · Yang Wang · Fang Chen
- 2022 Spotlight: Lightning Talks 3A-1
  Shu Ding · Wanxing Chang · Jiyang Guan · Mouxiang Chen · Guan Gui · Yue Tan · Shiyun Lin · Guodong Long · Yuze Han · Wei Wang · Zhen Zhao · Ye Shi · Jian Liang · Chenghao Liu · Lei Qi · Ran He · Jie Ma · Zemin Liu · Xiang Li · Hoang Tuan · Luping Zhou · Zhihua Zhang · Jianling Sun · Jingya Wang · LU LIU · Tianyi Zhou · Lei Wang · Jing Jiang · Yinghuan Shi
- 2020 Poster: Improving Auto-Augment via Augmentation-Wise Weight Sharing
  Keyu Tian · Chen Lin · Ming Sun · Luping Zhou · Junjie Yan · Wanli Ouyang