Self-paced learning (SPL) is a recently proposed learning regime, inspired by the learning process of humans and animals, that gradually incorporates samples into training from easy to more complex. Existing methods are limited in that they ignore an important aspect of learning: diversity. To incorporate this information, we propose an approach called self-paced learning with diversity (SPLD), which formalizes the preference for both easy and diverse samples into a general regularizer. This regularization term is independent of the learning objective and can therefore be easily applied to various learning tasks. Although the resulting problem is non-convex, the sample-selection variables in the SPLD regularization term can be solved globally in linearithmic time. We demonstrate that our method significantly outperforms conventional SPL on three real-world datasets. In particular, SPLD achieves the best MAP reported in the literature to date on the Hollywood2 and Olympic Sports datasets.
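To make the selection step concrete, below is a minimal sketch of how it could look, assuming samples are pre-clustered into groups and following the rank-decaying threshold lam + gamma / (sqrt(i) + sqrt(i - 1)) from the paper's closed-form selection rule; the function name `spld_select` and the parameter names `lam` and `gamma` are illustrative, not from the original source:

```python
import numpy as np

def spld_select(losses, groups, lam, gamma):
    """Sketch of the SPLD sample-selection step.

    Within each group, samples are ranked by ascending loss; the i-th
    ranked sample is selected iff its loss falls below
    lam + gamma / (sqrt(i) + sqrt(i - 1)). The threshold shrinks with
    rank, so selecting many samples from the same group becomes
    progressively harder -- this trades off easiness (small loss)
    against diversity (spreading selections across groups). Sorting
    dominates the cost, giving the linearithmic time noted above.
    """
    losses = np.asarray(losses, dtype=float)
    groups = np.asarray(groups)
    selected = np.zeros(losses.shape[0], dtype=bool)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        order = idx[np.argsort(losses[idx])]  # ascending loss within group
        for rank, j in enumerate(order, start=1):
            if losses[j] < lam + gamma / (np.sqrt(rank) + np.sqrt(rank - 1)):
                selected[j] = True
    return selected

# Toy usage: two groups of three samples each; the easiest samples of
# each group are admitted first, and high-loss samples are deferred.
losses = [0.05, 0.30, 0.80, 0.10, 0.20, 0.90]
groups = [0, 0, 0, 1, 1, 1]
print(spld_select(losses, groups, lam=0.25, gamma=0.20))
```

Note that at rank 1 the threshold reduces to lam + gamma, so every group gets a chance to contribute its easiest sample, which is the mechanism that encourages diversity across groups.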
Author Information
Lu Jiang (Carnegie Mellon University)
Deyu Meng (Carnegie Mellon University)
Shoou-I Yu (Carnegie Mellon University)
Zhenzhong Lan (Carnegie Mellon University)
Shiguang Shan (Chinese Academy of Sciences)
Alexander Hauptmann (Carnegie Mellon University)
More from the Same Authors
- 2022 Poster: Optimal Positive Generation via Latent Transformation for Contrastive Learning
  Hong Chang · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2023 Poster: Understanding Few-Shot Learning: Measuring Task Relatedness and Adaptation Difficulty via Attributes
  Minyang Hu · Hong Chang · Zong Guo · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2023 Poster: SPAE: Semantic Pyramid AutoEncoder for Multimodal Generation with Frozen LLMs
  Lijun Yu · Yong Cheng · Zhiruo Wang · Vivek Kumar · Wolfgang Macherey · Yanping Huang · David Ross · Irfan Essa · Yonatan Bisk · Ming-Hsuan Yang · Kevin Murphy · Alexander Hauptmann · Lu Jiang
- 2023 Poster: Generalized Semi-Supervised Learning via Self-Supervised Feature Adaptation
  Jiachen Liang · RuiBing Hou · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2022 Spotlight: Lightning Talks 3B-4
  Guanghu Yuan · Yijing Liu · Li Yang · Yongri Piao · Zekang Zhang · Yaxin Xiao · Lin Chen · Hong Chang · Fajie Yuan · Guangyu Gao · Hong Chang · Qinxian Liu · Zhixiang Wei · Qingqing Ye · Chenyang Lu · Jian Meng · Haibo Hu · Xin Jin · Yudong Li · Miao Zhang · Zhiyuan Fang · Jae-sun Seo · Bingpeng MA · Jian-Wei Zhang · Shiguang Shan · Haozhe Feng · Huaian Chen · Deliang Fan · Huadi Zheng · Jianbo Jiao · Huchuan Lu · Beibei Kong · Miao Zheng · Chengfang Fang · Shujie Li · Zhongwei Wang · Yunchao Wei · Xilin Chen · Jie Shi · Kai Chen · Zihan Zhou · Lei Chen · Yi Jin · Wei Chen · Min Yang · Chenyun YU · Bo Hu · Zang Li · Yu Xu · Xiaohu Qie
- 2022 Spotlight: Optimal Positive Generation via Latent Transformation for Contrastive Learning
  Hong Chang · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2020 Poster: Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
  Guoliang Kang · Yunchao Wei · Yi Yang · Yueting Zhuang · Alexander Hauptmann
- 2020 Oral: Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
  Guoliang Kang · Yunchao Wei · Yi Yang · Yueting Zhuang · Alexander Hauptmann
- 2019 Poster: Cross Attention Network for Few-shot Classification
  Ruibing Hou · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2019 Poster: Multi-label Co-regularization for Semi-supervised Facial Action Unit Recognition
  Xuesong Niu · Hu Han · Shiguang Shan · Xilin Chen
- 2014 Poster: Generalized Unsupervised Manifold Alignment
  Zhen Cui · Hong Chang · Shiguang Shan · Xilin Chen