Robustness to adversarial attacks has been shown to require larger model capacity, and thus a larger memory footprint. In this paper, we introduce an approach to obtain robust yet compact models by pruning randomly-initialized binary networks. Unlike adversarial training, which learns the model parameters, we initialize each parameter to either +1 or −1, keep it fixed, and search for a subnetwork structure that is robust to attacks. Our method confirms the Strong Lottery Ticket Hypothesis in the presence of adversarial attacks and extends it to binary networks. Furthermore, it yields networks that are more compact than those of existing works while achieving competitive performance, by 1) adaptively pruning different network layers; 2) exploiting an effective binary initialization scheme; and 3) incorporating a last batch normalization layer to improve training stability. Our experiments demonstrate that our approach not only consistently outperforms state-of-the-art robust binary networks, but on some datasets even surpasses the accuracy of full-precision models. Finally, we show the structured patterns that emerge in our pruned binary networks.
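To make the core idea concrete, below is a minimal PyTorch sketch of a single layer in this setting, in the spirit of the edge-popup algorithm commonly used for the Strong Lottery Ticket Hypothesis: weights are frozen at random ±1 values, a real-valued score is learned per weight, and the highest-scoring weights form the subnetwork. The names `GetSubnetwork`, `BinaryMaskedLinear`, and `prune_rate` are illustrative assumptions, not the authors' code; the paper's adaptive per-layer pruning rates, binary initialization scheme, and last batch normalization layer are not shown, and robustness would come from training the scores on adversarial examples (e.g., PGD), which is likewise omitted.

```python
# Sketch: prune a fixed, randomly-initialized binary layer by learning scores.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GetSubnetwork(torch.autograd.Function):
    """Keep the k highest-scoring weights; straight-through gradient otherwise."""

    @staticmethod
    def forward(ctx, scores, k):
        # Build a binary mask selecting the top-k scores.
        flat = scores.flatten()
        idx = torch.topk(flat, k).indices
        mask = torch.zeros_like(flat)
        mask[idx] = 1.0
        return mask.view_as(scores)

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: gradients flow to the scores unchanged.
        return grad_output, None


class BinaryMaskedLinear(nn.Linear):
    """Linear layer with fixed +1/-1 weights and a learned pruning mask."""

    def __init__(self, in_features, out_features, prune_rate=0.5):
        super().__init__(in_features, out_features, bias=False)
        # Freeze the weights at random signs; they are never updated.
        self.weight.data = torch.sign(torch.randn_like(self.weight))
        self.weight.requires_grad = False
        # Only the per-weight scores are trained (hypothetical parameterization).
        self.scores = nn.Parameter(0.01 * torch.randn_like(self.weight))
        self.k = int(self.weight.numel() * (1 - prune_rate))

    def forward(self, x):
        mask = GetSubnetwork.apply(self.scores, self.k)
        return F.linear(x, self.weight * mask)
```

Training such layers updates only `scores`, so the "learning" consists purely of selecting which fixed ±1 weights to keep, which is what yields a compact, quantized model.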
Author Information
Chen Liu (City University of Hong Kong)
Ziqi Zhao (École polytechnique fédérale de Lausanne (EPFL))

Ziqi Zhao is currently a master's student at EPFL. His research interests include adversarial robustness, network pruning, network quantization, and indoor localization.
Sabine Süsstrunk (EPFL)
Mathieu Salzmann (EPFL)
More from the Same Authors
- 2021: SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation
  Robin Chan · Krzysztof Lis · Svenja Uhlemeyer · Hermann Blum · Sina Honari · Roland Siegwart · Pascal Fua · Mathieu Salzmann · Matthias Rottmann
- 2022 Poster: Contact-aware Human Motion Forecasting
  Wei Mao · miaomiao Liu · Richard I Hartley · Mathieu Salzmann
- 2022 Spotlight: Lightning Talks 4B-3
  Zicheng Zhang · Mancheng Meng · Antoine Guedon · Yue Wu · Wei Mao · Zaiyu Huang · Peihao Chen · Shizhe Chen · yongwei chen · Keqiang Sun · Yi Zhu · chen rui · Hanhui Li · Dongyu Ji · Ziyan Wu · miaomiao Liu · Pascal Monasse · Yu Deng · Shangzhe Wu · Pierre-Louis Guhur · Jiaolong Yang · Kunyang Lin · Makarand Tapaswi · Zhaoyang Huang · Terrence Chen · Jiabao Lei · Jianzhuang Liu · Vincent Lepetit · Zhenyu Xie · Richard I Hartley · Dinggang Shen · Xiaodan Liang · Runhao Zeng · Cordelia Schmid · Michael Kampffmeyer · Mathieu Salzmann · Ning Zhang · Fangyun Wei · Yabin Zhang · Fan Yang · Qifeng Chen · Wei Ke · Quan Wang · Thomas Li · qingling Cai · Kui Jia · Ivan Laptev · Mingkui Tan · Xin Tong · Hongsheng Li · Xiaodan Liang · Chuang Gan
- 2022 Spotlight: Contact-aware Human Motion Forecasting
  Wei Mao · miaomiao Liu · Richard I Hartley · Mathieu Salzmann
- 2021 Poster: Distilling Image Classifiers in Object Detectors
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2021 Poster: Learning Transferable Adversarial Perturbations
  Krishna kanth Nakka · Mathieu Salzmann
- 2020 Poster: On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
  Chen Liu · Mathieu Salzmann · Tao Lin · Ryota Tomioka · Sabine Süsstrunk
- 2020 Poster: ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2020 Spotlight: ExpandNets: Linear Over-parameterization to Train Compact Convolutional Networks
  Shuxuan Guo · Jose M. Alvarez · Mathieu Salzmann
- 2019 Poster: Backpropagation-Friendly Eigendecomposition
  Wei Wang · Zheng Dang · Yinlin Hu · Pascal Fua · Mathieu Salzmann
- 2017 Poster: Compression-aware Training of Deep Networks
  Jose Alvarez · Mathieu Salzmann
- 2017 Poster: Deep Subspace Clustering Networks
  Pan Ji · Tong Zhang · Hongdong Li · Mathieu Salzmann · Ian Reid
- 2016 Poster: Learning the Number of Neurons in Deep Networks
  Jose M. Alvarez · Mathieu Salzmann