Are Vision Transformers Always More Robust Than Convolutional Neural Networks?
Francesco Pinto · Philip Torr · Puneet Dokania
Event URL: https://openreview.net/forum?id=CSXa8LJMttt

Since Transformer architectures were popularised in Computer Vision, several papers have analysed their properties in terms of calibration, out-of-distribution detection and data-shift robustness. Most of these papers conclude that Transformers outperform Convolutional Neural Networks (CNNs) due to some intrinsic properties, presumably the lack of restrictive inductive biases and the computationally intensive self-attention mechanism. In this paper we question this conclusion: in some relevant cases, CNNs trained with a pre-training and fine-tuning procedure similar to the one used for Transformers exhibit competitive robustness. Our evidence suggests that, to fully understand this behaviour, researchers should focus on the interaction between pre-training, fine-tuning and the considered architectures rather than on intrinsic properties of Transformers. For this reason, we present some preliminary analyses that shed light on the impact of pre-training and fine-tuning on out-of-distribution detection and data-shift robustness.

Author Information

Francesco Pinto (University of Oxford)
Philip Torr (University of Oxford)
Puneet Dokania (University of Oxford)