Existing adversarial learning methods assume the availability of a large amount of data from which adversarial examples can be generated. In an adversarial meta-learning setting, however, the model needs to learn transferable robust representations for unseen domains from only a few adversarial examples, a goal that is difficult to achieve even with abundant data. To tackle this challenge, we propose a novel adversarial self-supervised meta-learning framework with bilevel attacks, which aims to learn robust representations that generalize across tasks and domains. Specifically, in the inner loop, we update the parameters of the given encoder by taking inner gradient steps on two different sets of augmented samples, and generate adversarial examples for each view by maximizing the instance classification loss. Then, in the outer loop, we meta-learn the encoder parameters to maximize the agreement between the two adversarial examples, which enables the encoder to learn robust representations. We experimentally validate the effectiveness of our approach on unseen domain adaptation tasks. Our method significantly outperforms state-of-the-art meta-adversarial learning methods on few-shot learning tasks, as well as self-supervised learning baselines in standard learning settings with large-scale datasets.
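The bilevel structure described in the abstract can be illustrated with a minimal scalar sketch. This is not the authors' implementation: the encoder here is a toy linear map, the instance-classification loss is a squared distance to the anchor embedding, the attack is a single FGSM-style sign step, and all names (`encode`, `instance_loss`, `bilevel_step`, `inner_lr`, `outer_lr`) are hypothetical. It only shows the control flow: inner gradient steps per augmented view, per-view adversarial examples that maximize the instance loss, then an outer meta-update that minimizes disagreement between the two adversarial views.

```python
def encode(w, x):
    # toy linear "encoder": a scalar stand-in for a deep network
    return w * x

def instance_loss(w, x, anchor):
    # toy instance-classification loss: squared distance to the anchor embedding
    return (encode(w, x) - anchor) ** 2

def grad_x(w, x, anchor):
    # analytic gradient of instance_loss w.r.t. the input x (used to craft the attack)
    return 2.0 * (encode(w, x) - anchor) * w

def grad_w_agreement(w, x1, x2):
    # analytic gradient of the disagreement loss (f(x1) - f(x2))^2 w.r.t. w
    return 2.0 * w * (x1 - x2) ** 2

def bilevel_step(w, x, eps=0.1, inner_lr=0.05, outer_lr=0.01, views=(0.9, 1.1)):
    # two augmented views of the same instance (here, simple rescalings)
    x1, x2 = views[0] * x, views[1] * x
    anchor = encode(w, x)
    # inner loop: one inner gradient step on the instance loss for each view
    w1 = w - inner_lr * 2.0 * (encode(w, x1) - anchor) * x1
    w2 = w - inner_lr * 2.0 * (encode(w, x2) - anchor) * x2
    # generate an adversarial example per view by ascending the instance loss
    sign1 = 1.0 if grad_x(w1, x1, anchor) >= 0 else -1.0
    sign2 = 1.0 if grad_x(w2, x2, anchor) >= 0 else -1.0
    x1_adv, x2_adv = x1 + eps * sign1, x2 + eps * sign2
    # outer loop: meta-update w to maximize agreement between the adversarial views
    return w - outer_lr * grad_w_agreement(w, x1_adv, x2_adv)

w = 1.0
for _ in range(50):
    w = bilevel_step(w, 1.0)
```

In this one-dimensional toy, the outer update shrinks the embedding gap between the two adversarial views at every step, mirroring (in miniature) how the meta-objective pushes the encoder toward view-invariant, attack-robust representations.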
Author Information
Minseon Kim (KAIST)
Hyeonjeong Ha (KAIST)
Sung Ju Hwang (KAIST, AITRICS)
More from the Same Authors
- 2022: Distortion-Aware Network Pruning and Feature Reuse for Real-time Video Segmentation
  Hyunsu Rhee · Dongchan Min · Sunil Hwang · Bruno Andreis · Sung Ju Hwang
- 2022: Targeted Adversarial Self-Supervised Learning
  Minseon Kim · Hyeonjeong Ha · Sooel Son · Sung Ju Hwang
- 2023 Poster: Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations
  Hyeonjeong Ha · Minseon Kim · Sung Ju Hwang
- 2023 Poster: Effective Targeted Attacks for Adversarial Self-Supervised Learning
  Minseon Kim · Hyeonjeong Ha · Sooel Son · Sung Ju Hwang
- 2020 Poster: Bootstrapping neural processes
  Juho Lee · Yoonho Lee · Jungtaek Kim · Eunho Yang · Sung Ju Hwang · Yee Whye Teh
- 2020 Poster: Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning
  Jaehyung Kim · Youngbum Hur · Sejun Park · Eunho Yang · Sung Ju Hwang · Jinwoo Shin
- 2020 Poster: Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction
  Jinheon Baek · Dong Bok Lee · Sung Ju Hwang
- 2020 Poster: Time-Reversal Symmetric ODE Network
  In Huh · Eunho Yang · Sung Ju Hwang · Jinwoo Shin
- 2020 Poster: Neural Complexity Measures
  Yoonho Lee · Juho Lee · Sung Ju Hwang · Eunho Yang · Seungjin Choi
- 2020 Poster: Adversarial Self-Supervised Contrastive Learning
  Minseon Kim · Jihoon Tack · Sung Ju Hwang
- 2020 Poster: MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
  Jeong Un Ryu · JaeWoong Shin · Hae Beom Lee · Sung Ju Hwang
- 2020 Spotlight: MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
  Jeong Un Ryu · JaeWoong Shin · Hae Beom Lee · Sung Ju Hwang
- 2020 Poster: Few-shot Visual Reasoning with Meta-Analogical Contrastive Learning
  Youngsung Kim · Jinwoo Shin · Eunho Yang · Sung Ju Hwang
- 2020 Poster: Attribution Preservation in Network Compression for Reliable Network Interpretation
  Geondo Park · June Yong Yang · Sung Ju Hwang · Eunho Yang
- 2018 Poster: Uncertainty-Aware Attention for Reliable Interpretation and Prediction
  Jay Heo · Hae Beom Lee · Saehoon Kim · Juho Lee · Kwang Joon Kim · Eunho Yang · Sung Ju Hwang
- 2018 Poster: Joint Active Feature Acquisition and Classification with Variable-Size Set Encoding
  Hajin Shim · Sung Ju Hwang · Eunho Yang
- 2018 Poster: DropMax: Adaptive Variational Softmax
  Hae Beom Lee · Juho Lee · Saehoon Kim · Eunho Yang · Sung Ju Hwang