The success of adversarial attacks and the performance tradeoffs made by adversarial defense methods have both traditionally been evaluated on image test sets constructed from a randomly sampled, held-out portion of a training set. Mayo et al. (2022) [1] measured the difficulty of the ImageNet and ObjectNet test sets by the minimum viewing time a human needs, on average, to recognize each object, finding that these test sets are heavily skewed toward easy, quickly recognized images. While difficult images that require longer viewing times to be recognized are uncommon in test sets, they are both common in and critically important to the real-world performance of vision models. In this work, we investigate the relationship between adversarial robustness and viewing-time difficulty. Measuring the AUC of accuracy versus attack strength (epsilon), we find that easy, quickly recognized images are more robust to adversarial attacks than difficult images, which require several seconds of viewing time to recognize. Additionally, adversarial defense methods improve models' robustness to adversarial attacks on easy images significantly more than on hard images. We propose that the distribution of image difficulties should be carefully considered and controlled for both when measuring the effectiveness of adversarial attacks and when analyzing the clean-accuracy vs. robustness tradeoff made by adversarial defense methods.
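The core metric above is the area under the accuracy-vs-attack-strength curve. A minimal sketch of how such an AUC could be computed, assuming per-difficulty-bin accuracies have already been measured at each epsilon; the epsilon grid, accuracy values, and the helper name robustness_auc below are illustrative placeholders, not values or code from the paper:

```python
import numpy as np

# Hypothetical accuracies at each attack strength (epsilon),
# split by viewing-time difficulty. Placeholder values only.
epsilons = np.array([0.0, 1/255, 2/255, 4/255, 8/255])
acc_easy = np.array([0.95, 0.80, 0.62, 0.35, 0.10])  # quickly recognized images
acc_hard = np.array([0.70, 0.45, 0.28, 0.10, 0.02])  # long-viewing-time images

def robustness_auc(eps, acc):
    """Trapezoidal area under the accuracy-vs-epsilon curve,
    normalized so 100% accuracy at every epsilon scores 1.0."""
    area = np.sum(np.diff(eps) * (acc[1:] + acc[:-1]) / 2.0)
    return area / (eps[-1] - eps[0])

print(f"easy-image AUC: {robustness_auc(epsilons, acc_easy):.3f}")
print(f"hard-image AUC: {robustness_auc(epsilons, acc_hard):.3f}")
```

Under these placeholder numbers, the easy-image AUC exceeds the hard-image AUC, mirroring the paper's qualitative finding that quickly recognized images retain more accuracy as attack strength grows.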
Author Information
David Mayo (MIT)
Jesse Cummings (MIT)
Xinyu Lin (MIT)
Boris Katz (MIT)
Andrei Barbu (MIT)