Spotlight
Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations
Amit Daniely · Hadas Shacham
Thu Dec 10 08:10 AM – 08:20 AM (PST) @ Orals & Spotlights: Graph/Relational/Theory
We consider ReLU networks with random weights, in which the dimension decreases at each layer.
We show that for most such networks, most examples $x$ admit an adversarial perturbation at Euclidean distance $O\left(\frac{\|x\|}{\sqrt{d}}\right)$, where $d$ is the input dimension. Moreover, this perturbation can be found via gradient flow, as well as via gradient descent with sufficiently small steps.
This result can be seen as an explanation for the abundance of adversarial examples, and for the fact that they can be found via gradient descent.
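As a rough illustration of the statement above, the following sketch (assuming PyTorch; the layer widths, step size, and iteration budget are illustrative choices, not the authors' setup) builds a random ReLU network whose dimension decreases at each layer and runs small-step gradient descent on the input until the output sign flips, then compares the perturbation norm to $\|x\|/\sqrt{d}$:

```python
# Minimal sketch (not the authors' code): for a random ReLU network with
# decreasing layer widths, small-step gradient descent on the input often
# finds a sign-flipping perturbation of Euclidean norm on the order of
# ||x|| / sqrt(d). All hyperparameters here are illustrative assumptions.
import torch

torch.manual_seed(0)

d = 1024                      # input dimension
widths = [d, 512, 256, 1]     # dimension decreases at each layer

# Random weights with 1/sqrt(fan-in) scaling (an assumed initialization).
layers = [torch.randn(n_out, n_in) / n_in**0.5
          for n_in, n_out in zip(widths[:-1], widths[1:])]

def net(x):
    for i, W in enumerate(layers):
        x = W @ x
        if i < len(layers) - 1:   # ReLU on all hidden layers
            x = torch.relu(x)
    return x.squeeze()

x0 = torch.randn(d)
y0 = net(x0)

# Gradient descent with small steps on the input, stopping when the
# output sign flips (the loop may also exhaust its budget without flipping).
x = x0.clone().requires_grad_(True)
for _ in range(10_000):
    out = net(x)
    if torch.sign(out) != torch.sign(y0):
        break
    out.backward()
    with torch.no_grad():
        # Step against the gradient of sign(y0) * output, pushing it to zero.
        x -= 1e-3 * torch.sign(y0) * x.grad
    x.grad.zero_()

delta = (x.detach() - x0).norm()
print(f"||delta||        = {delta:.4f}")
print(f"||x|| / sqrt(d)  = {x0.norm() / d**0.5:.4f}")
```

Under the paper's scaling, the two printed quantities should be of the same order; this mirrors the gradient-descent-with-small-steps procedure described in the abstract, not a verbatim reproduction of the proof's construction.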
Author Information
Amit Daniely (Hebrew University and Google Research)
Hadas Shacham (Hebrew University)
Related Events (a corresponding poster, oral, or spotlight)

2020 Poster: Most ReLU Networks Suffer from $\ell^2$ Adversarial Perturbations »
Thu. Dec 10th 05:00 PM – 07:00 PM Room: Poster Session 5 #1556
More from the Same Authors

2020 Poster: Neural Networks Learning and Memorization with (almost) no Over-Parameterization »
Amit Daniely 
2020 Poster: Learning Parities with Neural Networks »
Amit Daniely · Eran Malach 
2020 Poster: Hardness of Learning Neural Networks with Natural Weights »
Amit Daniely · Gal Vardi 
2020 Oral: Learning Parities with Neural Networks »
Amit Daniely · Eran Malach 
2019 Poster: Locally Private Learning without Interaction Requires Separation »
Amit Daniely · Vitaly Feldman 
2019 Poster: Generalization Bounds for Neural Networks via Approximate Description Length »
Amit Daniely · Elad Granot 
2019 Spotlight: Generalization Bounds for Neural Networks via Approximate Description Length »
Amit Daniely · Elad Granot 
2017 Poster: SGD Learns the Conjugate Kernel Class of the Network »
Amit Daniely 
2016 Poster: Toward Deeper Understanding of Neural Networks: The Power of Initialization and a Dual View on Expressivity »
Amit Daniely · Roy Frostig · Yoram Singer 
2013 Poster: More data speeds up training time in learning halfspaces over sparse vectors »
Amit Daniely · Nati Linial · Shai Shalev-Shwartz 
2013 Spotlight: More data speeds up training time in learning halfspaces over sparse vectors »
Amit Daniely · Nati Linial · Shai Shalev-Shwartz 
2012 Poster: Multiclass Learning Approaches: A Theoretical Comparison with Implications »
Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz 
2012 Spotlight: Multiclass Learning Approaches: A Theoretical Comparison with Implications »
Amit Daniely · Sivan Sabato · Shai Shalev-Shwartz