Cross Attention Network for Few-Shot Classification
Few-shot classification aims to recognize unlabeled samples from unseen classes given only a few labeled samples. The unseen classes and the low-data regime make few-shot classification very challenging. Many existing approaches extract features from labeled and unlabeled samples independently; as a result, the features are not discriminative enough. In this work, we propose a novel Cross Attention Network to address these challenges. First, a Cross Attention Module is introduced to deal with the problem of unseen classes. The module generates cross attention maps for each pair of class feature and query sample feature, highlighting the target object regions and making the extracted features more discriminative. Second, a transductive inference algorithm is proposed to alleviate the low-data problem; it iteratively uses the unlabeled query set to augment the support set, thereby making the class features more representative. Extensive experiments on two benchmarks show that our method is a simple, effective and computationally efficient framework that outperforms the state of the art.
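The abstract describes the cross attention idea only at a high level. As an illustration, here is a minimal, hypothetical PyTorch sketch: a correlation map between a class (support) feature map and a query feature map is turned into spatial attention maps that re-weight both features so target-object regions are emphasized. The cosine-similarity correlation, the mean pooling, the temperature, and all tensor shapes below are assumptions made for illustration, not the paper's actual meta-fusion design.

```python
# Minimal sketch (not the authors' implementation) of a cross attention map
# between a class feature map and a query feature map.
import torch
import torch.nn.functional as F


def cross_attention(class_feat: torch.Tensor, query_feat: torch.Tensor):
    """class_feat, query_feat: (C, H, W) feature maps from a shared backbone."""
    C, H, W = class_feat.shape
    p = class_feat.reshape(C, H * W)   # (C, m), m = H*W spatial positions
    q = query_feat.reshape(C, H * W)
    p = F.normalize(p, dim=0)          # unit-normalize each spatial vector
    q = F.normalize(q, dim=0)          # so the dot product is cosine similarity
    corr = p.t() @ q                   # (m, m) position-to-position correlation

    # Attention over class-feature positions: how well each matches the query
    # (mean pooling and temperature 0.05 are arbitrary choices for this sketch).
    attn_p = F.softmax(corr.mean(dim=1) / 0.05, dim=0).reshape(1, H, W)
    # Attention over query-feature positions: how well each matches the class.
    attn_q = F.softmax(corr.mean(dim=0) / 0.05, dim=0).reshape(1, H, W)

    # Re-weight the original features to highlight target-object regions.
    return class_feat * (1 + attn_p), query_feat * (1 + attn_q)


if __name__ == "__main__":
    cf, qf = torch.randn(64, 6, 6), torch.randn(64, 6, 6)
    cf_out, qf_out = cross_attention(cf, qf)
    print(cf_out.shape, qf_out.shape)  # torch.Size([64, 6, 6]) torch.Size([64, 6, 6])
```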
Author Information
Ruibing Hou (Institute of Computing Technology, Chinese Academy of Sciences)
Hong Chang (Institute of Computing Technology, Chinese Academy of Sciences)
Bingpeng MA (University of Chinese Academy of Sciences)
Shiguang Shan (Chinese Academy of Sciences)
Xilin Chen (Institute of Computing Technology, Chinese Academy of Sciences)
More from the Same Authors
- 2022 Poster: Optimal Positive Generation via Latent Transformation for Contrastive Learning
  Hong Chang · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2023 Poster: Understanding Few-Shot Learning: Measuring Task Relatedness and Adaptation Difficulty via Attributes
  Minyang Hu · Hong Chang · Zong Guo · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2023 Poster: Glance and Focus: Memory Prompting for Multi-Event Video Question Answering
  Ziyi Bai · Ruiping Wang · Xilin Chen
- 2023 Poster: Generalized Semi-Supervised Learning via Self-Supervised Feature Adaptation
  Jiachen Liang · RuiBing Hou · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2022 Spotlight: Lightning Talks 3B-4
  Guanghu Yuan · Yijing Liu · Li Yang · Yongri Piao · Zekang Zhang · Yaxin Xiao · Lin Chen · Hong Chang · Fajie Yuan · Guangyu Gao · Hong Chang · Qinxian Liu · Zhixiang Wei · Qingqing Ye · Chenyang Lu · Jian Meng · Haibo Hu · Xin Jin · Yudong Li · Miao Zhang · Zhiyuan Fang · Jae-sun Seo · Bingpeng MA · Jian-Wei Zhang · Shiguang Shan · Haozhe Feng · Huaian Chen · Deliang Fan · Huadi Zheng · Jianbo Jiao · Huchuan Lu · Beibei Kong · Miao Zheng · Chengfang Fang · Shujie Li · Zhongwei Wang · Yunchao Wei · Xilin Chen · Jie Shi · Kai Chen · Zihan Zhou · Lei Chen · Yi Jin · Wei Chen · Min Yang · Chenyun YU · Bo Hu · Zang Li · Yu Xu · Xiaohu Qie
- 2022 Spotlight: Optimal Positive Generation via Latent Transformation for Contrastive Learning
  Hong Chang · Hong Chang · Bingpeng MA · Shiguang Shan · Xilin Chen
- 2021 Poster: HRFormer: High-Resolution Vision Transformer for Dense Predict
  YUHUI YUAN · Rao Fu · Lang Huang · Weihong Lin · Chao Zhang · Xilin Chen · Jingdong Wang
- 2019 Poster: Multi-label Co-regularization for Semi-supervised Facial Action Unit Recognition
  Xuesong Niu · Hu Han · Shiguang Shan · Xilin Chen
- 2014 Poster: Generalized Unsupervised Manifold Alignment
  Zhen Cui · Hong Chang · Shiguang Shan · Xilin Chen
- 2014 Poster: Self-Paced Learning with Diversity
  Lu Jiang · Deyu Meng · Shoou-I Yu · Zhenzhong Lan · Shiguang Shan · Alexander Hauptmann