Intermediate-level attacks, which drastically perturb feature representations along an adversarial direction, have shown favorable performance in crafting transferable adversarial examples. Existing methods in this category are normally formulated in two separate stages: a directional guide is determined first, and the scalar projection of the intermediate-level perturbation onto that guide is enlarged thereafter. The obtained perturbation inevitably deviates from the guide in the feature space, and this paper reveals that such a deviation may lead to sub-optimal attacks. To address this issue, we develop a novel intermediate-level method that crafts adversarial examples within a single stage of optimization. In particular, the proposed method, named intermediate-level perturbation decay (ILPD), encourages the intermediate-level perturbation to lie in an effective adversarial direction and, simultaneously, to possess a large magnitude. In-depth discussion verifies the effectiveness of our method. Experimental results show that it outperforms state-of-the-art methods by large margins in attacking various victim models on ImageNet (+10.07% on average) and CIFAR-10 (+3.88% on average). Our code is at https://github.com/qizhangli/ILPD-attack.
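To make the single-stage idea concrete, below is a minimal PyTorch sketch of an intermediate-level attack that decays the feature-space perturbation during each forward pass, so that maximizing the classification loss pushes the intermediate-level perturbation toward an adversarial direction while growing its magnitude. The decayed forward pass via `gamma`-mixing, the choice of hooked `layer`, and all hyper-parameters are illustrative assumptions rather than the paper's exact formulation; see the linked repository for the reference implementation.

```python
# Hypothetical sketch of a single-stage intermediate-level attack (PyTorch).
# Assumes `model` is in eval mode and `layer` is an intermediate module
# whose output is a single tensor; `gamma`, `eps`, `alpha`, and `steps`
# are illustrative values, not the paper's hyper-parameters.
import torch
import torch.nn.functional as F

def ilpd_style_attack(model, layer, x, y, eps=8/255, alpha=2/255,
                      steps=50, gamma=0.5):
    x_nat = x.detach()
    delta = torch.zeros_like(x_nat, requires_grad=True)

    # Cache the natural intermediate-level representation h(x).
    feats = {}
    handle = layer.register_forward_hook(lambda m, i, o: feats.update(h=o))
    with torch.no_grad():
        model(x_nat)
    handle.remove()
    h_nat = feats["h"].detach()

    # During attack iterations, replace h(x + delta) with
    # h(x) + gamma * (h(x + delta) - h(x)), i.e., a decayed perturbation.
    handle = layer.register_forward_hook(
        lambda m, i, o: h_nat + gamma * (o - h_nat))

    for _ in range(steps):
        loss = F.cross_entropy(model(x_nat + delta), y)  # to be maximized
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()           # gradient ascent
            delta.clamp_(-eps, eps)                      # l_inf budget
            delta.copy_((x_nat + delta).clamp(0, 1) - x_nat)  # valid image
        delta.grad = None

    handle.remove()
    return (x_nat + delta).detach()
```

Returning a tensor from a forward hook replaces the module's output in PyTorch, which lets the decay be applied without editing the model definition; the classification loss is then computed and maximized through the decayed feature in a single optimization loop.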
Author Information
Qizhang Li (Harbin Institute of Technology)
Yiwen Guo (Unaffiliated)
Wangmeng Zuo (Harbin Institute of Technology)
Hao Chen (UC Davis)
More from the Same Authors
- 2021 Spotlight: Robust and Fully-Dynamic Coreset for Continuous-and-Bounded Learning (With Outliers) Problems »
  Zixiu Wang · Yiwen Guo · Hu Ding
- 2022 Poster: When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture »
  Yichuan Mo · Dongxian Wu · Yifei Wang · Yiwen Guo · Yisen Wang
- 2023 Poster: Adversarial Examples Are Not Real Features »
  Ang Li · Yifei Wang · Yiwen Guo · Yisen Wang
- 2023 Poster: Towards Evaluating Transfer-based Attacks Systematically, Practically, and Fairly »
  Qizhang Li · Yiwen Guo · Wangmeng Zuo · Hao Chen
- 2022 Spotlight: Lightning Talks 6A-2 »
  Yichuan Mo · Botao Yu · Gang Li · Zezhong Xu · Haoran Wei · Arsene Fansi Tchango · Raef Bassily · Haoyu Lu · Qi Zhang · Songming Liu · Mingyu Ding · Peiling Lu · Yifei Wang · Xiang Li · Dongxian Wu · Ping Guo · Wen Zhang · Hao Zhongkai · Mehryar Mohri · Rishab Goel · Yisen Wang · Yifei Wang · Yangguang Zhu · Zhi Wen · Ananda Theertha Suresh · Chengyang Ying · Yujie Wang · Peng Ye · Rui Wang · Nanyi Fei · Hui Chen · Yiwen Guo · Wei Hu · Chenglong Liu · Julien Martel · Yuqi Huo · Wu Yichao · Hang Su · Yisen Wang · Peng Wang · Huajun Chen · Xu Tan · Jun Zhu · Ding Liang · Zhiwu Lu · Joumana Ghosn · Shanshan Zhang · Wei Ye · Ze Cheng · Shikun Zhang · Tao Qin · Tie-Yan Liu
- 2022 Spotlight: When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture »
  Yichuan Mo · Dongxian Wu · Yifei Wang · Yiwen Guo · Yisen Wang
- 2021 Poster: Robust and Fully-Dynamic Coreset for Continuous-and-Bounded Learning (With Outliers) Problems »
  Zixiu Wang · Yiwen Guo · Hu Ding
- 2020 Poster: Backpropagating Linearly Improves Transferability of Adversarial Examples »
  Yiwen Guo · Qizhang Li · Hao Chen
- 2020 Poster: Practical No-box Adversarial Attacks against DNNs »
  Qizhang Li · Yiwen Guo · Hao Chen
- 2020 Poster: Cross-Scale Internal Graph Neural Network for Image Super-Resolution »
  Shangchen Zhou · Jiawei Zhang · Wangmeng Zuo · Chen Change Loy
- 2019 Poster: DATA: Differentiable ArchiTecture Approximation »
  Jianlong Chang · Xinbang Zhang · Yiwen Guo · Gaofeng Meng · Shiming Xiang · Chunhong Pan
- 2019 Poster: Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks »
  Yiwen Guo · Ziang Yan · Changshui Zhang
- 2018 Poster: Global Gated Mixture of Second-order Pooling for Improving Deep Convolutional Neural Networks »
  Qilong Wang · Zilin Gao · Jiangtao Xie · Wangmeng Zuo · Peihua Li