The vulnerability of deep neural networks (DNNs) to adversarial examples has drawn great attention from the community. In this paper, we study the transferability of such examples, which underlies many black-box attacks on DNNs. We revisit a not-so-new but noteworthy hypothesis of Goodfellow et al. and show that transferability can be enhanced by improving the linearity of DNNs in an appropriate manner. We introduce linear backpropagation (LinBP), a method that performs backpropagation in a more linear fashion and can be coupled with off-the-shelf gradient-based attacks. More specifically, it computes the forward pass as normal but backpropagates the loss as if some nonlinear activations had not been encountered in the forward pass. Experimental results demonstrate that this simple yet effective method clearly outperforms the current state of the art in crafting transferable adversarial examples on CIFAR-10 and ImageNet, leading to more effective attacks on a variety of DNNs. Code is available at: https://github.com/qizhangli/linbp-attack.
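As a rough sketch of the idea (not the authors' exact implementation, which is in the repository above and may differ in details), the backward pass of a surrogate model can be made "more linear" in PyTorch by keeping the standard ReLU forward while letting gradients pass through it unchanged; the names LinBPReLU and linbp_relu below are illustrative only:

    import torch

    class LinBPReLU(torch.autograd.Function):
        """ReLU whose gradient is computed as if the nonlinearity were skipped."""

        @staticmethod
        def forward(ctx, x):
            # Forward pass: a standard ReLU, so model predictions are unchanged.
            return torch.relu(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Backward pass: pass the incoming gradient through untouched,
            # i.e., treat the activation as the identity ("linear" backprop).
            return grad_output

    def linbp_relu(x):
        # Illustrative drop-in replacement for torch.relu in a surrogate
        # model's forward(). Gradient-based attacks (e.g., FGSM or PGD) can
        # then be run against this modified surrogate and the resulting
        # adversarial examples transferred to the victim model.
        return LinBPReLU.apply(x)

Which activations to linearize (for instance, only those in later layers) is a tunable choice; the repository above contains the authors' actual implementation.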
Author Information
Yiwen Guo (ByteDance AI Lab)
Qizhang Li (ByteDance AI Lab)
Hao Chen (UC Davis)
More from the Same Authors
- 2021 Spotlight: Robust and Fully-Dynamic Coreset for Continuous-and-Bounded Learning (With Outliers) Problems
  Zixiu Wang · Yiwen Guo · Hu Ding
- 2022 Poster: When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
  Yichuan Mo · Dongxian Wu · Yifei Wang · Yiwen Guo · Yisen Wang
- 2022 Spotlight: Lightning Talks 6A-2
  Yichuan Mo · Botao Yu · Gang Li · Zezhong Xu · Haoran Wei · Arsene Fansi Tchango · Raef Bassily · Haoyu Lu · Qi Zhang · Songming Liu · Mingyu Ding · Peiling Lu · Yifei Wang · Xiang Li · Dongxian Wu · Ping Guo · Wen Zhang · Hao Zhongkai · Mehryar Mohri · Rishab Goel · Yisen Wang · Yifei Wang · Yangguang Zhu · Zhi Wen · Ananda Theertha Suresh · Chengyang Ying · Yujie Wang · Peng Ye · Rui Wang · Nanyi Fei · Hui Chen · Yiwen Guo · Wei Hu · Chenglong Liu · Julien Martel · Yuqi Huo · Wu Yichao · Hang Su · Yisen Wang · Peng Wang · Huajun Chen · Xu Tan · Jun Zhu · Ding Liang · Zhiwu Lu · Joumana Ghosn · Shanshan Zhang · Wei Ye · Ze Cheng · Shikun Zhang · Tao Qin · Tie-Yan Liu
- 2022 Spotlight: When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture
  Yichuan Mo · Dongxian Wu · Yifei Wang · Yiwen Guo · Yisen Wang
- 2021 Poster: Robust and Fully-Dynamic Coreset for Continuous-and-Bounded Learning (With Outliers) Problems
  Zixiu Wang · Yiwen Guo · Hu Ding
- 2020 Poster: Practical No-box Adversarial Attacks against DNNs
  Qizhang Li · Yiwen Guo · Hao Chen
- 2019 Poster: DATA: Differentiable ArchiTecture Approximation
  Jianlong Chang · Xinbang Zhang · Yiwen Guo · Gaofeng Meng · Shiming Xiang · Chunhong Pan
- 2019 Poster: Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks
  Yiwen Guo · Ziang Yan · Changshui Zhang