Despite incredible advances, deep learning has been shown to be susceptible to adversarial attacks. Numerous approaches have been proposed to train robust networks, both empirically and certifiably. However, most of them defend against only a single type of attack, and only recent work has taken steps toward defending against multiple attacks simultaneously. In this paper, to understand multi-target robustness, we view the problem as a bargaining game in which different players (adversaries) negotiate to reach an agreement on a joint direction of parameter updating. We identify a phenomenon named "player domination" in the bargaining game, and show that under this phenomenon some existing max-based approaches, such as MAX and MSD, do not converge. Based on our theoretical results, we design a novel framework that adjusts the budgets of different adversaries to avoid player domination. Experiments on two benchmarks show that applying the proposed framework to existing approaches significantly improves multi-target robustness.
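The abstract describes the mechanics only at a high level. As a rough illustration of the setting, the sketch below shows MAX-style multi-adversary training (taking a descent step on the worst-case loss among several PGD adversaries) together with a budget-adjustment heuristic in the spirit of the abstract. Everything here is an assumption for illustration, not the paper's actual algorithm: the PGD settings, the use of L-inf attacks at different budgets as the "players" (the real setting typically mixes perturbation types, e.g. L1/L2/L-inf), and the adjust_budgets rule are all hypothetical.

```python
# Minimal sketch only: MAX-style multi-adversary training plus a
# HYPOTHETICAL budget-adjustment heuristic inspired by the abstract.
# This is not the authors' framework.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps):
    # Standard L-inf PGD: ascend the loss, project back into the eps-ball.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()
            delta.clamp_(-eps, eps)  # projection onto the budget
    return (x + delta).detach()

def max_style_step(model, optimizer, x, y, budgets):
    # One MAX-style update: each "player" (adversary) attacks, and the
    # model descends on the single worst-case adversarial loss.
    losses = []
    for eps in budgets:
        x_adv = pgd_attack(model, x, y, eps=eps, alpha=2.5 * eps / 10, steps=10)
        losses.append(F.cross_entropy(model(x_adv), y))
    worst = torch.stack(losses).max()  # MAX: follow the strongest player
    optimizer.zero_grad()
    worst.backward()
    optimizer.step()
    return [l.item() for l in losses]

def adjust_budgets(budgets, losses, streak, patience=5, decay=0.9):
    # Hypothetical heuristic: if the same adversary produces the largest
    # loss for `patience` consecutive steps (it "dominates"), shrink its
    # budget so the other players regain influence on the joint update.
    top = max(range(len(losses)), key=losses.__getitem__)
    streak = [s + 1 if i == top else 0 for i, s in enumerate(streak)]
    if streak[top] >= patience:
        budgets[top] *= decay
        streak[top] = 0
    return budgets, streak
```

Under player domination, the max in max_style_step repeatedly selects the same adversary, so gradients from the other players never reach the parameters; the (hypothetical) adjustment above is one simple way to break such a streak.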
Author Information
Yimu Wang (University of Waterloo)
Dinghuai Zhang (Mila)
Yihan Wu (University of Pittsburgh)
Heng Huang (University of Pittsburgh)
Hongyang Zhang (School of Computer Science, University of Waterloo)
More from the Same Authors
- 2021 Spotlight: Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization
  Kartik Ahuja · Ethan Caballero · Dinghuai Zhang · Jean-Christophe Gagnon-Audet · Yoshua Bengio · Ioannis Mitliagkas · Irina Rish
- 2022: FedGRec: Federated Graph Recommender System with Lazy Update of Latent Embeddings
  Junyi Li · Heng Huang
- 2022 Poster: Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness
  Avrim Blum · Omar Montasser · Greg Shakhnarovich · Hongyang Zhang
- 2022 Poster: MetricFormer: A Unified Perspective of Correlation Exploring in Similarity Learning
  Jiexi Yan · Erkun Yang · Cheng Deng · Heng Huang
- 2022 Poster: Enhanced Bilevel Optimization via Bregman Distance
  Feihu Huang · Junyi Li · Shangqian Gao · Heng Huang
- 2021 Poster: Optimal Underdamped Langevin MCMC Method
  Zhengmian Hu · Feihu Huang · Heng Huang
- 2021 Poster: Fast Training Method for Stochastic Compositional Optimization Problems
  Hongchang Gao · Heng Huang
- 2021 Poster: SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients
  Feihu Huang · Junyi Li · Heng Huang
- 2021 Poster: Efficient Mirror Descent Ascent Methods for Nonsmooth Minimax Problems
  Feihu Huang · Xidong Wu · Heng Huang
- 2021 Poster: A Faster Decentralized Algorithm for Nonconvex Minimax Problems
  Wenhan Xian · Feihu Huang · Yanfu Zhang · Heng Huang
- 2021 Poster: Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization
  Kartik Ahuja · Ethan Caballero · Dinghuai Zhang · Jean-Christophe Gagnon-Audet · Yoshua Bengio · Ioannis Mitliagkas · Irina Rish
- 2020 Poster: Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework
  Dinghuai Zhang · Mao Ye · Chengyue Gong · Zhanxing Zhu · Qiang Liu
- 2019 Poster: Curvilinear Distance Metric Learning
  Shuo Chen · Lei Luo · Jian Yang · Chen Gong · Jun Li · Heng Huang
- 2018 Poster: Bilevel Distance Metric Learning for Robust Image Recognition
  Jie Xu · Lei Luo · Cheng Deng · Heng Huang
- 2018 Poster: Training Neural Networks Using Features Replay
  Zhouyuan Huo · Bin Gu · Heng Huang
- 2018 Spotlight: Training Neural Networks Using Features Replay
  Zhouyuan Huo · Bin Gu · Heng Huang
- 2017 Poster: Group Sparse Additive Machine
  Hong Chen · Xiaoqian Wang · Cheng Deng · Heng Huang
- 2017 Poster: Regularized Modal Regression with Applications in Cognitive Impairment Prediction
  Xiaoqian Wang · Hong Chen · Weidong Cai · Dinggang Shen · Heng Huang
- 2017 Poster: Learning A Structured Optimal Bipartite Graph for Co-Clustering
  Feiping Nie · Xiaoqian Wang · Cheng Deng · Heng Huang