Deep neural networks suffer from over-fitting and catastrophic forgetting when trained on small datasets. One natural remedy for this problem is data augmentation, which has recently been shown to be effective. However, previous works either assume that intra-class variances can always be generalized to new classes, or employ naive generation methods to hallucinate finitely many examples without modeling their latent distributions. In this work, we propose Covariance-Preserving Adversarial Augmentation Networks to overcome existing limits of low-shot learning. Specifically, a novel Generative Adversarial Network is designed to model the latent distribution of each novel class given its related base counterparts. Since direct estimation on novel classes can be inductively biased, we explicitly preserve covariance information as the "variability" of base examples during the generation process. Empirical results show that our model can generate realistic yet diverse examples, leading to substantial improvements on the ImageNet benchmark over the state of the art.
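As a rough illustration of the core idea (reusing the covariance, i.e. the "variability", of a data-rich base class to hallucinate examples for a few-shot novel class), the sketch below samples synthetic feature vectors around the novel-class mean using the base-class covariance. This is a deliberately simplified assumption: the paper trains a GAN conditioned on base examples rather than sampling from a Gaussian, and the helper name covariance_transfer_augment is hypothetical.

```python
import numpy as np

def covariance_transfer_augment(novel_feats, base_feats, n_new, rng=None):
    """Hallucinate feature vectors for a low-shot novel class by reusing
    the covariance ("variability") of a related, data-rich base class.

    novel_feats: (k, d) array of the few available novel-class features.
    base_feats:  (m, d) array of features from a related base class.
    n_new:       number of synthetic novel-class features to draw.
    """
    rng = rng or np.random.default_rng(0)
    mu_novel = novel_feats.mean(axis=0)            # novel-class center from the few shots
    sigma_base = np.cov(base_feats, rowvar=False)  # base-class covariance to transfer
    # Sample around the novel mean with the base class's spread
    # (Gaussian stand-in for the paper's learned GAN generator).
    return rng.multivariate_normal(mu_novel, sigma_base, size=n_new)

# Toy usage: a 5-shot novel class in a 64-d feature space,
# borrowing variability from 500 base-class examples.
d = 64
base = np.random.default_rng(1).normal(size=(500, d))
novel = np.random.default_rng(2).normal(loc=2.0, size=(5, d))
synthetic = covariance_transfer_augment(novel, base, n_new=100)
print(synthetic.shape)  # (100, 64)
```

The Gaussian here only serves to show where the base-class covariance enters; the paper's adversarial generator replaces this sampling step while preserving the same covariance structure.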
Author Information
Hang Gao (Columbia University)
Zheng Shou (Columbia University)
Alireza Zareian (Columbia University)
Hanwang Zhang (Nanyang Technological University)
Shih-Fu Chang (Columbia University)
More from the Same Authors
- 2021 Spotlight: Self-Supervised Learning Disentangled Group Representation as Feature
  Tan Wang · Zhongqi Yue · Jianqiang Huang · Qianru Sun · Hanwang Zhang
- 2022 Poster: Respecting Transfer Gap in Knowledge Distillation
  Yulei Niu · Long Chen · Chang Zhou · Hanwang Zhang
- 2022 Poster: Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners
  Zhenhailong Wang · Manling Li · Ruochen Xu · Luowei Zhou · Jie Lei · Xudong Lin · Shuohang Wang · Ziyi Yang · Chenguang Zhu · Derek Hoiem · Shih-Fu Chang · Mohit Bansal · Heng Ji
- 2021 Poster: Self-Supervised Learning Disentangled Group Representation as Feature
  Tan Wang · Zhongqi Yue · Jianqiang Huang · Qianru Sun · Hanwang Zhang
- 2021 Poster: How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?
  Xinshuai Dong · Anh Tuan Luu · Min Lin · Shuicheng Yan · Hanwang Zhang
- 2021 Poster: Introspective Distillation for Robust Question Answering
  Yulei Niu · Hanwang Zhang
- 2021 Poster: VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text
  Hassan Akbari · Liangzhe Yuan · Rui Qian · Wei-Hong Chuang · Shih-Fu Chang · Yin Cui · Boqing Gong
- 2020 Poster: Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect
  Kaihua Tang · Jianqiang Huang · Hanwang Zhang
- 2020 Poster: Causal Intervention for Weakly-Supervised Semantic Segmentation
  Dong Zhang · Hanwang Zhang · Jinhui Tang · Xian-Sheng Hua · Qianru Sun
- 2020 Oral: Causal Intervention for Weakly-Supervised Semantic Segmentation
  Dong Zhang · Hanwang Zhang · Jinhui Tang · Xian-Sheng Hua · Qianru Sun
- 2020 Poster: Interventional Few-Shot Learning
  Zhongqi Yue · Hanwang Zhang · Qianru Sun · Xian-Sheng Hua