Convolutions are a fundamental building block of modern computer vision systems. Recent approaches have argued for going beyond convolutions in order to capture long-range dependencies. These efforts focus on augmenting convolutional models with content-based interactions, such as self-attention and non-local means, to achieve gains on a number of vision tasks. The natural question that arises is whether attention can be a stand-alone primitive for vision models instead of serving as just an augmentation on top of convolutions. In developing and testing a pure self-attention vision model, we verify that self-attention can indeed be an effective stand-alone layer. A simple procedure of replacing all instances of spatial convolutions in ResNet-50 with a form of self-attention produces a fully self-attentional model that outperforms the baseline on ImageNet classification with 12% fewer FLOPS and 29% fewer parameters. On COCO object detection, a fully self-attentional model matches the mAP of a baseline RetinaNet while having 39% fewer FLOPS and 34% fewer parameters. Detailed ablation studies demonstrate that self-attention is especially impactful when used in later layers. These results establish that stand-alone self-attention is an important addition to the vision practitioner's toolbox.
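As a rough illustration of the replacement described in the abstract, the sketch below shows a local 2D self-attention layer that could stand in for a k × k spatial convolution. It is a minimal sketch, assuming PyTorch; the class name LocalSelfAttention2d, the single-head formulation, and the simplified per-offset relative bias are illustrative assumptions rather than the authors' implementation, which additionally uses multi-head attention and 2D relative position embeddings.

```python
# Hedged sketch (not the authors' code): a local 2D self-attention layer that
# attends over a k x k neighborhood around each pixel, in place of a spatial
# convolution. Assumes PyTorch; names and simplifications are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalSelfAttention2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=7):
        super().__init__()
        self.k = kernel_size
        self.query = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.key = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        self.value = nn.Conv2d(in_channels, out_channels, 1, bias=False)
        # Simplification: one learned additive bias per relative offset in the
        # k x k window (the paper factorizes row/column position embeddings).
        self.rel_bias = nn.Parameter(torch.zeros(kernel_size * kernel_size))

    def forward(self, x):
        b, _, h, w = x.shape
        q = self.query(x)                              # (B, C, H, W)
        k = self.key(x)
        v = self.value(x)
        pad = self.k // 2
        # Gather the k x k neighborhood of keys/values around every pixel.
        k = F.unfold(k, self.k, padding=pad)           # (B, C*k*k, H*W)
        v = F.unfold(v, self.k, padding=pad)
        c = q.shape[1]
        k = k.view(b, c, self.k * self.k, h * w)
        v = v.view(b, c, self.k * self.k, h * w)
        q = q.view(b, c, 1, h * w)
        # Attention logits over the local window, one distribution per pixel.
        logits = (q * k).sum(dim=1) / c ** 0.5         # (B, k*k, H*W)
        logits = logits + self.rel_bias.view(1, -1, 1)
        attn = logits.softmax(dim=1)
        # Weighted sum of neighborhood values.
        out = (attn.unsqueeze(1) * v).sum(dim=2)       # (B, C, H*W)
        return out.view(b, c, h, w)


if __name__ == "__main__":
    layer = LocalSelfAttention2d(64, 64, kernel_size=7)
    y = layer(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

Swapping such a layer in for each 3 × 3 convolution of a ResNet bottleneck block, with the 1 × 1 convolutions kept as pointwise projections, is the kind of drop-in replacement the abstract refers to; exact hyperparameters (window size, heads, downsampling) follow the paper.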
Author Information
Niki Parmar (Google)
Prajit Ramachandran (Google Brain)
Ashish Vaswani (Google Brain)
Irwan Bello (Google Brain)
Anselm Levskaya (Google)
Jon Shlens (Google Research)
More from the Same Authors
- 2020 Poster: RandAugment: Practical Automated Data Augmentation with a Reduced Search Space
  Ekin Dogus Cubuk · Barret Zoph · Jon Shlens · Quoc V Le
- 2019 Poster: A Fourier Perspective on Model Robustness in Computer Vision
  Dong Yin · Raphael Gontijo Lopes · Jon Shlens · Ekin Dogus Cubuk · Justin Gilmer
- 2018 Poster: Searching for Efficient Multi-Scale Architectures for Dense Image Prediction
  Maxwell Collins · Yukun Zhu · George Papandreou · Barret Zoph · Florian Schroff · Bo Chen · Jon Shlens
- 2018 Poster: Mesh-TensorFlow: Deep Learning for Supercomputers
  Noam Shazeer · Youlong Cheng · Niki Parmar · Dustin Tran · Ashish Vaswani · Penporn Koanantakool · Peter Hawkins · HyoukJoong Lee · Mingsheng Hong · Cliff Young · Ryan Sepassi · Blake Hechtman
- 2017 Poster: Attention is All you Need
  Ashish Vaswani · Noam Shazeer · Niki Parmar · Jakob Uszkoreit · Llion Jones · Aidan Gomez · Łukasz Kaiser · Illia Polosukhin
- 2017 Spotlight: Attention is All you Need
  Ashish Vaswani · Noam Shazeer · Niki Parmar · Jakob Uszkoreit · Llion Jones · Aidan Gomez · Łukasz Kaiser · Illia Polosukhin
- 2013 Poster: DeViSE: A Deep Visual-Semantic Embedding Model
  Andrea Frome · Greg Corrado · Jon Shlens · Samy Bengio · Jeff Dean · Marc'Aurelio Ranzato · Tomas Mikolov
- 2013 Demonstration: DeViSE: A Deep Visual-Semantic Embedding Model
  Jon Shlens · Andrea Frome