Over the past five years, the susceptibility of neural networks to minimal adversarial perturbations has moved from a peculiar phenomenon to a core issue in deep learning. Despite much attention, however, progress towards more robust models is significantly impaired by the difficulty of evaluating the robustness of neural network models. Today's methods are either fast but brittle (gradient-based attacks), or fairly reliable but slow (score- and decision-based attacks). Here we develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient masking than other gradient-based attacks, (b) perform better and are more query-efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria, and (d) require virtually no hyperparameter tuning. These findings are carefully validated across a diverse set of six different models and hold for L0, L1, L2 and Linf in both targeted and untargeted scenarios. Implementations will soon be available in all major toolboxes (Foolbox, CleverHans and ART). We hope that this class of attacks will make robustness evaluations easier and more reliable, thus contributing to more signal in the search for more robust machine learning models.
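To illustrate the general class of gradient-based attacks the abstract refers to, here is a minimal PGD-style sketch under an Linf constraint. This is not the paper's proposed attack; the toy linear model, the logistic loss, and all parameter values (`epsilon`, `step`, `n_steps`) are illustrative assumptions.

```python
import numpy as np

def predict(w, b, x):
    """Toy differentiable binary classifier: sigmoid(w @ x + b)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def pgd_linf(w, b, x, y, epsilon=0.3, step=0.05, n_steps=40):
    """Projected gradient ascent on the logistic loss within an Linf ball.

    Untargeted attack: repeatedly step in the sign of the loss gradient,
    then project back onto the epsilon-ball around the clean input x.
    """
    x_adv = x.copy()
    for _ in range(n_steps):
        p = predict(w, b, x_adv)
        # For logistic loss, d(loss)/dx simplifies to (p - y) * w
        grad = (p - y) * w
        x_adv = x_adv + step * np.sign(grad)              # ascend the loss
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project to Linf ball
    return x_adv

rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = w / np.linalg.norm(w)  # a point the toy model scores confidently as class 1
x_adv = pgd_linf(w, b, x, y=1.0)
print(np.max(np.abs(x_adv - x)))  # perturbation size never exceeds epsilon
```

Such fixed-step, fixed-epsilon attacks are exactly the "fast but brittle" category the abstract contrasts with: their success depends on step size, epsilon, and usable gradients, which motivates attacks that need virtually no hyperparameter tuning.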
Author Information
Wieland Brendel (AG Bethge, University of Tübingen)
Jonas Rauber (University of Tübingen)
Matthias Kümmerer (University of Tübingen)
Ivan Ustyuzhaninov (University of Tübingen)
Matthias Bethge (University of Tübingen)
More from the Same Authors
- 2020 Poster: System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina
  Cornelius Schröder · David Klindt · Sarah Strauss · Katrin Franke · Matthias Bethge · Thomas Euler · Philipp Berens
- 2020 Spotlight: System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina
  Cornelius Schröder · David Klindt · Sarah Strauss · Katrin Franke · Matthias Bethge · Thomas Euler · Philipp Berens
- 2020 Poster: Improving robustness against common corruptions by covariate shift adaptation
  Steffen Schneider · Evgenia Rusak · Luisa Eck · Oliver Bringmann · Wieland Brendel · Matthias Bethge
- 2019 Poster: Learning from brains how to regularize machines
  Zhe Li · Wieland Brendel · Edgar Walker · Erick Cobos · Taliah Muhammad · Jacob Reimer · Matthias Bethge · Fabian Sinz · Xaq Pitkow · Andreas Tolias
- 2018 Poster: Generalisation in humans and deep neural networks
  Robert Geirhos · Carlos R. M. Temme · Jonas Rauber · Heiko H. Schütt · Matthias Bethge · Felix A. Wichmann
- 2017 Poster: Neural system identification for large populations separating “what” and “where”
  David Klindt · Alexander Ecker · Thomas Euler · Matthias Bethge
- 2015 Poster: Texture Synthesis Using Convolutional Neural Networks
  Leon A Gatys · Alexander Ecker · Matthias Bethge
- 2015 Poster: Generative Image Modeling Using Spatial LSTMs
  Lucas Theis · Matthias Bethge
- 2012 Poster: Training sparse natural image models with a fast Gibbs sampler of an extended state space
  Lucas Theis · Jascha Sohl-Dickstein · Matthias Bethge
- 2010 Poster: Evaluating neuronal codes for inference using Fisher information
  Ralf Haefner · Matthias Bethge
- 2009 Poster: Hierarchical Modeling of Local Image Features through $L_p$-Nested Symmetric Distributions
  Fabian H Sinz · Eero Simoncelli · Matthias Bethge
- 2009 Poster: Neurometric function analysis of population codes
  Philipp Berens · Sebastian Gerwinn · Alexander S Ecker · Matthias Bethge
- 2009 Poster: A joint maximum-entropy model for binary neural population patterns and continuous signals
  Sebastian Gerwinn · Philipp Berens · Matthias Bethge
- 2009 Spotlight: A joint maximum-entropy model for binary neural population patterns and continuous signals
  Sebastian Gerwinn · Philipp Berens · Matthias Bethge
- 2009 Poster: Bayesian estimation of orientation preference maps
  Jakob H Macke · Sebastian Gerwinn · Leonard White · Matthias Kaschube · Matthias Bethge
- 2008 Poster: The Conjoint Effect of Divisive Normalization and Orientation Selectivity on Redundancy Reduction
  Fabian H Sinz · Matthias Bethge
- 2008 Spotlight: The Conjoint Effect of Divisive Normalization and Orientation Selectivity on Redundancy Reduction
  Fabian H Sinz · Matthias Bethge
- 2007 Oral: Bayesian Inference for Spiking Neuron Models with a Sparsity Prior
  Sebastian Gerwinn · Jakob H Macke · Matthias Seeger · Matthias Bethge
- 2007 Spotlight: Near-Maximum Entropy Models for Binary Neural Representations of Natural Images
  Matthias Bethge · Philipp Berens
- 2007 Poster: Near-Maximum Entropy Models for Binary Neural Representations of Natural Images
  Matthias Bethge · Philipp Berens
- 2007 Poster: Bayesian Inference for Spiking Neuron Models with a Sparsity Prior
  Sebastian Gerwinn · Jakob H Macke · Matthias Seeger · Matthias Bethge
- 2007 Poster: Receptive Fields without Spike-Triggering
  Jakob H Macke · Günther Zeck · Matthias Bethge