Poster
Gradient Methods Provably Converge to Non-Robust Networks
Gal Vardi · Gilad Yehudai · Ohad Shamir

Tue Nov 29 09:00 AM -- 11:00 AM (PST) @ Hall J #723
Despite a great deal of research, it is still unclear why neural networks are so susceptible to adversarial examples. In this work, we identify natural settings where depth-$2$ ReLU networks trained with gradient flow are provably non-robust (susceptible to small adversarial $\ell_2$-perturbations), even when robust networks that classify the training dataset correctly exist. Perhaps surprisingly, we show that the well-known implicit bias towards margin maximization induces a bias towards non-robust networks, by proving that every network which satisfies the KKT conditions of the max-margin problem is non-robust.
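For context, the margin-maximization problem whose KKT points are referred to above is, in the standard formulation from the implicit-bias literature, a constrained norm-minimization problem. The following is a sketch assuming a depth-$2$ ReLU network $N_\theta(x) = \sum_{j=1}^k v_j\,[\langle w_j, x\rangle + b_j]_+$ with parameters $\theta = (v_j, w_j, b_j)_{j=1}^k$ and labels $y_i \in \{\pm 1\}$; the paper's exact parameterization may differ:

$$\min_{\theta}\ \tfrac{1}{2}\|\theta\|_2^2 \quad \text{s.t.}\quad y_i\, N_\theta(x_i) \ge 1 \ \ \text{for all } i \in [n].$$

Here "non-robust" roughly means that the prediction at an input $x_i$ can be flipped by a small $\ell_2$-perturbation, i.e., there exists $z$ with small $\|z\|_2$ such that $\operatorname{sign}(N_\theta(x_i + z)) \neq y_i$.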

Author Information

Gal Vardi (TTI-Chicago)
Gilad Yehudai (Weizmann Institute of Technology)
Ohad Shamir (Weizmann Institute of Science)