"The power of a generalization system follows directly from its biases'" (Mitchell 1980). Today, CNNs are incredibly powerful generalisation systems---but to what degree have we understood how their inductive bias influences model decisions? We here attempt to disentangle the various aspects that determine how a model decides. In particular, we ask: what makes one model decide differently from another? In a meticulously controlled setting, we find that (1.) irrespective of the network architecture or objective (e.g. self-supervised, semi-supervised, vision transformers, recurrent models) all models end up with a similar decision boundary. (2.) To understand these findings, we analysed model decisions on the ImageNet validation set from epoch to epoch and image by image. We find that the ImageNet validation set suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.3% trivial'' and 11.3%
impossible'' images. Only 42.4% of the images are responsible for the differences between two models' decision boundaries. The impossible images are not driven by label errors. (3.) Finally, humans are highly accurate at predicting which images are trivial'' and
impossible'' for CNNs (81.4%). Taken together, it appears that ImageNet suffers from dichotomous data difficulty. This implies that in future comparisons of brains, machines and behaviour, much may be gained from investigating the decisive role of images and the distribution of their difficulties.
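The DDD partition described above lends itself to a short sketch. The following is a minimal illustration, not the authors' code: assuming a boolean matrix correct[model, image] recording per-image correctness for each model (the function name and data layout are illustrative assumptions), an image counts as "trivial" if every investigated model classifies it correctly and "impossible" if none does; only the remaining images can distinguish two models' decision boundaries.

```python
import numpy as np

def dichotomous_difficulty(correct: np.ndarray):
    """Partition images by difficulty (illustrative sketch, not the paper's code).

    correct: boolean array of shape (n_models, n_images); entry [m, i]
    is True if model m classifies image i correctly.
    Returns boolean masks for "trivial" (all models correct),
    "impossible" (no model correct), and the remaining images.
    """
    trivial = correct.all(axis=0)       # every model gets the image right
    impossible = ~correct.any(axis=0)   # every model gets the image wrong
    intermediate = ~(trivial | impossible)  # only these images separate models
    return trivial, impossible, intermediate

# Toy usage with random predictions (hypothetical data, illustration only):
rng = np.random.default_rng(0)
correct = rng.random((5, 1000)) < 0.75  # 5 hypothetical models, 1000 images
trivial, impossible, intermediate = dichotomous_difficulty(correct)
print(f"trivial: {trivial.mean():.1%}, impossible: {impossible.mean():.1%}, "
      f"intermediate: {intermediate.mean():.1%}")
```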
Author Information
Kristof Meding (University of Tübingen)
Luca Schulze Buschoff (University of Tübingen)
Robert Geirhos (University of Tübingen)
Felix A. Wichmann (University of Tübingen)
More from the Same Authors
- 2021 Spotlight: How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
  Roland S. Zimmermann · Judy Borowski · Robert Geirhos · Matthias Bethge · Thomas Wallis · Wieland Brendel
- 2021: Out-of-distribution robustness: Limited image exposure of a four-year-old is enough to outperform ResNet-50
  Lukas Huber · Robert Geirhos · Felix A. Wichmann
- 2021 Poster: How Well do Feature Visualizations Support Causal Understanding of CNN Activations?
  Roland S. Zimmermann · Judy Borowski · Robert Geirhos · Matthias Bethge · Thomas Wallis · Wieland Brendel
- 2021 Oral: Partial success in closing the gap between human and machine vision
  Robert Geirhos · Kantharaju Narayanappa · Benjamin Mitzkus · Tizian Thieringer · Matthias Bethge · Felix A. Wichmann · Wieland Brendel
- 2021 Poster: Partial success in closing the gap between human and machine vision
  Robert Geirhos · Kantharaju Narayanappa · Benjamin Mitzkus · Tizian Thieringer · Matthias Bethge · Felix A. Wichmann · Wieland Brendel
- 2020 Poster: Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency
  Robert Geirhos · Kristof Meding · Felix A. Wichmann
- 2019 Poster: Perceiving the arrow of time in autoregressive motion
  Kristof Meding · Dominik Janzing · Bernhard Schölkopf · Felix A. Wichmann
- 2019 Spotlight: Perceiving the arrow of time in autoregressive motion
  Kristof Meding · Dominik Janzing · Bernhard Schölkopf · Felix A. Wichmann
- 2018 Poster: Generalisation in humans and deep neural networks
  Robert Geirhos · Carlos R. M. Temme · Jonas Rauber · Heiko H. Schütt · Matthias Bethge · Felix A. Wichmann