Despite strong performance in numerous applications, the fragility of deep learning to input perturbations has raised serious questions about its use in safety-critical domains. While adversarial training can mitigate this issue in practice, state-of-the-art methods are increasingly application-dependent, heuristic in nature, and suffer from fundamental trade-offs between nominal performance and robustness. Moreover, the problem of finding worst-case perturbations is non-convex and underparameterized, both of which engender an unfavorable optimization landscape. Thus, there is a gap between the theory and practice of robust learning, particularly with respect to when and why adversarial training works. In this paper, we take a constrained learning approach to address these questions and to provide a theoretical foundation for robust learning. In particular, we leverage semi-infinite optimization and non-convex duality theory to show that adversarial training is equivalent to a statistical problem over perturbation distributions. Notably, we show that a myriad of previous robust training techniques can be recovered for particular, sub-optimal choices of these distributions. Using these insights, we then propose a hybrid Langevin Markov Chain Monte Carlo approach for which several common algorithms (e.g., PGD) are special cases. Finally, we show that our approach can mitigate the trade-off between nominal and robust performance, yielding state-of-the-art results on MNIST and CIFAR-10. Our code is available at https://github.com/arobey1/advbench.
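The hybrid Langevin Monte Carlo idea summarized above, in which worst-case perturbations are sampled rather than computed by pure gradient ascent, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical PyTorch example, not the authors' implementation (their code lives in the linked advbench repository): the function name langevin_perturbation and all hyperparameter values are illustrative assumptions, and it assumes an ℓ∞ perturbation budget on inputs normalized to [0, 1]. Setting the noise scale to zero recovers the standard PGD attack as a special case.

```python
# Illustrative sketch only: noisy sign-gradient ascent over perturbations.
# Not the authors' algorithm; names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def langevin_perturbation(model, x, y, eps=8 / 255, step_size=2 / 255,
                          noise_scale=1e-3, steps=10):
    """Sample an l_inf-bounded perturbation by noisy gradient ascent.

    With noise_scale = 0 this reduces to the familiar PGD attack.
    """
    delta = torch.empty_like(x).uniform_(-eps, eps)  # random start in the ball
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            # Ascent step on the loss plus Gaussian (Langevin-style) noise,
            # then projection onto the l_inf ball and the valid input range.
            delta = delta + step_size * grad.sign() \
                + noise_scale * torch.randn_like(delta)
            delta = delta.clamp(-eps, eps)
            delta = (x + delta).clamp(0, 1) - x
    return delta.detach()
```

In an adversarial training loop one would, for each batch, draw delta = langevin_perturbation(model, x, y) and take an optimizer step on the loss at x + delta; averaging the loss over several sampled perturbations per example is one way to approximate the expectation over perturbation distributions that the abstract describes.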
Author Information
Alexander Robey (University of Pennsylvania)
Luiz Chamon (University of Pennsylvania)
George J. Pappas (University of Pennsylvania)
George J. Pappas is the UPS Foundation Professor and Chair of the Department of Electrical and Systems Engineering at the University of Pennsylvania. He also holds secondary appointments in the Departments of Computer and Information Science and of Mechanical Engineering and Applied Mechanics. He is a member of the GRASP Lab and the PRECISE Center, and he has previously served as the Deputy Dean for Research in the School of Engineering and Applied Science. His research focuses on control theory and, in particular, on hybrid systems, embedded systems, and hierarchical and distributed control systems, with applications to unmanned aerial vehicles, distributed robotics, green buildings, and biomolecular networks. He is a Fellow of the IEEE and has received awards including the Antonio Ruberti Young Researcher Prize, the George S. Axelby Award, the O. Hugo Schuck Best Paper Award, the National Science Foundation PECASE, and the George H. Heilmeier Faculty Excellence Award.
Hamed Hassani (University of Pennsylvania)
Alejandro Ribeiro (University of Pennsylvania)
More from the Same Authors
- 2021 : State Augmented Constrained Reinforcement Learning: Overcoming the Limitations of Learning with Rewards » Miguel Calvo-Fullana · Santiago Paternain · Alejandro Ribeiro
- 2022 : Convolutional Neural Networks on Manifolds: From Graphs and Back » Zhiyang Wang · Luana Ruiz · Alejandro Ribeiro
- 2022 Spotlight: Learning Operators with Coupled Attention » Georgios Kissas · Jacob Seidman · Leonardo Ferreira Guilhoto · Victor M. Preciado · George J. Pappas · Paris Perdikaris
- 2022 Poster: NOMAD: Nonlinear Manifold Decoders for Operator Learning » Jacob Seidman · Georgios Kissas · Paris Perdikaris · George J. Pappas
- 2022 Poster: Learning Operators with Coupled Attention » Georgios Kissas · Jacob Seidman · Leonardo Ferreira Guilhoto · Victor M. Preciado · George J. Pappas · Paris Perdikaris
- 2022 Poster: A Lagrangian Duality Approach to Active Learning » Juan Elenter · Navid Naderializadeh · Alejandro Ribeiro
- 2022 Poster: Probable Domain Generalization via Quantile Risk Minimization » Cian Eastwood · Alexander Robey · Shashank Singh · Julius von Kügelgen · Hamed Hassani · George J. Pappas · Bernhard Schölkopf
- 2022 Poster: coVariance Neural Networks » Saurabh Sihag · Gonzalo Mateos · Corey McMillan · Alejandro Ribeiro
- 2022 Poster: Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds » Aritra Mitra · Arman Adibi · George J. Pappas · Hamed Hassani
- 2021 Poster: Linear Convergence in Federated Learning: Tackling Client Heterogeneity and Sparse Gradients » Aritra Mitra · Rayana Jaafar · George J. Pappas · Hamed Hassani
- 2021 Poster: Model-Based Domain Generalization » Alexander Robey · George J. Pappas · Hamed Hassani
- 2021 Poster: Safe Pontryagin Differentiable Programming » Wanxin Jin · Shaoshuai Mou · George J. Pappas
- 2020 Poster: Sinkhorn Natural Gradient for Generative Models » Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Poster: Sinkhorn Barycenter via Functional Gradient Descent » Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Spotlight: Sinkhorn Natural Gradient for Generative Models » Zebang Shen · Zhenfu Wang · Alejandro Ribeiro · Hamed Hassani
- 2020 Poster: Graphon Neural Networks and the Transferability of Graph Neural Networks » Luana Ruiz · Luiz Chamon · Alejandro Ribeiro
- 2020 Poster: Probably Approximately Correct Constrained Learning » Luiz Chamon · Alejandro Ribeiro
- 2019 : Poster and Coffee Break 1 » Aaron Sidford · Aditya Mahajan · Alejandro Ribeiro · Alex Lewandowski · Ali H Sayed · Ambuj Tewari · Angelika Steger · Anima Anandkumar · Asier Mujika · Hilbert J Kappen · Bolei Zhou · Byron Boots · Chelsea Finn · Chen-Yu Wei · Chi Jin · Ching-An Cheng · Christina Yu · Clement Gehring · Craig Boutilier · Dahua Lin · Daniel McNamee · Daniel Russo · David Brandfonbrener · Denny Zhou · Devesh Jha · Diego Romeres · Doina Precup · Dominik Thalmeier · Eduard Gorbunov · Elad Hazan · Elena Smirnova · Elvis Dohmatob · Emma Brunskill · Enrique Munoz de Cote · Ethan Waldie · Florian Meier · Florian Schaefer · Ge Liu · Gergely Neu · Haim Kaplan · Hao Sun · Hengshuai Yao · Jalaj Bhandari · James A Preiss · Jayakumar Subramanian · Jiajin Li · Jieping Ye · Jimmy Smith · Joan Bas Serrano · Joan Bruna · John Langford · Jonathan Lee · Jose A. Arjona-Medina · Kaiqing Zhang · Karan Singh · Yuping Luo · Zafarali Ahmed · Zaiwei Chen · Zhaoran Wang · Zhizhong Li · Zhuoran Yang · Ziping Xu · Ziyang Tang · Yi Mao · David Brandfonbrener · Shirli Di-Castro · Riashat Islam · Zuyue Fu · Abhishek Naik · Saurabh Kumar · Benjamin Petit · Angeliki Kamoutsi · Simone Totaro · Arvind Raghunathan · Rui Wu · Donghwan Lee · Dongsheng Ding · Alec Koppel · Hao Sun · Christian Tjandraatmadja · Mahdi Karami · Jincheng Mei · Chenjun Xiao · Junfeng Wen · Zichen Zhang · Ross Goroshin · Mohammad Pezeshki · Jiaqi Zhai · Philip Amortila · Shuo Huang · Mariya Vasileva · El houcine Bergou · Adel Ahmadyan · Haoran Sun · Sheng Zhang · Lukas Gruber · Yuanhao Wang · Tetiana Parshakova
- 2019 Poster: Constrained Reinforcement Learning Has Zero Duality Gap » Santiago Paternain · Luiz Chamon · Miguel Calvo-Fullana · Alejandro Ribeiro
- 2019 Poster: Stability of Graph Scattering Transforms » Fernando Gama · Alejandro Ribeiro · Joan Bruna
- 2019 Poster: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks » Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
- 2019 Spotlight: Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks » Mahyar Fazlyab · Alexander Robey · Hamed Hassani · Manfred Morari · George J. Pappas
- 2017 Poster: Approximate Supermodularity Bounds for Experimental Design » Luiz Chamon · Alejandro Ribeiro
- 2017 Poster: First-Order Adaptive Sample Size Methods to Reduce Complexity of Empirical Risk Minimization » Aryan Mokhtari · Alejandro Ribeiro
- 2016 Poster: Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy » Aryan Mokhtari · Hadi Daneshmand · Aurelien Lucchi · Thomas Hofmann · Alejandro Ribeiro
- 2016 Poster: Fast and Provably Good Seedings for k-Means » Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2016 Oral: Fast and Provably Good Seedings for k-Means » Olivier Bachem · Mario Lucic · Hamed Hassani · Andreas Krause
- 2015 Poster: Sampling from Probabilistic Submodular Models » Alkis Gotovos · Hamed Hassani · Andreas Krause
- 2015 Oral: Sampling from Probabilistic Submodular Models » Alkis Gotovos · Hamed Hassani · Andreas Krause