Transformers have shown great potential in various computer vision tasks owing to their strong capability in modeling long-range dependency using the self-attention mechanism. Nevertheless, vision transformers treat an image as a 1D sequence of visual tokens, lacking an intrinsic inductive bias (IB) for modeling local visual structures and dealing with scale variance. As a result, they require large-scale training data and longer training schedules to learn the IB implicitly. In this paper, we propose a new Vision Transformer Advanced by Exploring intrinsic IB from convolutions, i.e., ViTAE. Technically, ViTAE has several spatial pyramid reduction modules that downsample and embed the input image into tokens with rich multi-scale context by using multiple convolutions with different dilation rates. In this way, it acquires an intrinsic scale-invariance IB and is able to learn robust feature representations for objects at various scales. Moreover, in each transformer layer, ViTAE has a convolution block in parallel to the multi-head self-attention module, whose features are fused and fed into the feed-forward network. Consequently, it has the intrinsic locality IB and is able to learn local features and global dependencies collaboratively. Experiments on ImageNet as well as downstream tasks demonstrate the superiority of ViTAE over the baseline transformer and concurrent works. Source code and pretrained models will be available at https://github.com/Annbless/ViTAE.
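The core of the pyramid reduction idea is to run the same input through convolutions with several dilation rates and fuse the results, so each embedded token carries multi-scale context. Below is a toy NumPy sketch of that idea only; it is not the actual ViTAE implementation, and the function names, single-channel setting, and shared kernel are illustrative assumptions.

```python
import numpy as np


def dilated_conv2d(x, kernel, dilation):
    """'Same'-padded 2D correlation of a single-channel image with a dilated kernel."""
    kh, kw = kernel.shape
    # Effective receptive field grows with dilation without adding parameters.
    pad_h = ((kh - 1) * dilation) // 2
    pad_w = ((kw - 1) * dilation) // 2
    xp = np.pad(x, ((pad_h, pad_h), (pad_w, pad_w)))
    H, W = x.shape
    out = np.zeros((H, W))
    for i in range(kh):
        for j in range(kw):
            # Each kernel tap samples the input at a dilated offset.
            out += kernel[i, j] * xp[i * dilation:i * dilation + H,
                                     j * dilation:j * dilation + W]
    return out


def multi_scale_embed(x, kernel, dilations=(1, 2, 3), stride=2):
    """Stack dilated responses channel-wise, then downsample into tokens."""
    feats = np.stack([dilated_conv2d(x, kernel, d) for d in dilations])
    # Strided downsampling; each retained location becomes one token whose
    # feature vector mixes context from all dilation rates.
    down = feats[:, ::stride, ::stride]
    tokens = down.reshape(len(dilations), -1).T
    return feats, tokens
```

For an 8x8 image with a 3x3 averaging kernel and dilations (1, 2, 3), `multi_scale_embed` yields a (3, 8, 8) multi-scale feature map and a (16, 3) token matrix: 16 tokens, each with one feature per dilation rate. In the paper's design these fused tokens then enter transformer layers where a parallel convolution branch complements self-attention.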
Author Information
Yufei Xu (The University of Sydney)
Qiming ZHANG (University of Sydney)
Jing Zhang (The University of Sydney)
Dacheng Tao (University of Technology, Sydney)
More from the Same Authors
- 2021: AP-10K: A Benchmark for Animal Pose Estimation in the Wild
  Hang Yu · Yufei Xu · Jing Zhang · Wei Zhao · Ziyu Guan · Dacheng Tao
- 2022 Poster: Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach
  Peng Mi · Li Shen · Tianhe Ren · Yiyi Zhou · Xiaoshuai Sun · Rongrong Ji · Dacheng Tao
- 2022 Poster: ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation
  Yufei Xu · Jing Zhang · Qiming ZHANG · Dacheng Tao
- 2022 Poster: APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking
  Yuxiang Yang · Junjie Yang · Yufei Xu · Jing Zhang · Long Lan · Dacheng Tao
- 2022 Spotlight: Escaping from the Barren Plateau via Gaussian Initializations in Deep Variational Quantum Circuits
  Kaining Zhang · Liu Liu · Min-Hsiu Hsieh · Dacheng Tao
- 2022 Spotlight: Lightning Talks 4B-4
  Ziyue Jiang · Zeeshan Khan · Yuxiang Yang · Chenze Shao · Yichong Leng · Zehao Yu · Wenguan Wang · Xian Liu · Zehua Chen · Yang Feng · Qianyi Wu · James Liang · C.V. Jawahar · Junjie Yang · Zhe Su · Songyou Peng · Yufei Xu · Junliang Guo · Michael Niemeyer · Hang Zhou · Zhou Zhao · Makarand Tapaswi · Dongfang Liu · Qian Yang · Torsten Sattler · Yuanqi Du · Haohe Liu · Jing Zhang · Andreas Geiger · Yi Ren · Long Lan · Jiawei Chen · Wayne Wu · Dahua Lin · Dacheng Tao · Xu Tan · Jinglin Liu · Ziwei Liu · 振辉 叶 · Danilo Mandic · Lei He · Xiangyang Li · Tao Qin · sheng zhao · Tie-Yan Liu
- 2022 Spotlight: APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking
  Yuxiang Yang · Junjie Yang · Yufei Xu · Jing Zhang · Long Lan · Dacheng Tao
- 2022 Spotlight: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach
  Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao
- 2022 Poster: Inducing Neural Collapse in Imbalanced Learning: Do We Really Need a Learnable Classifier at the End of Deep Neural Network?
  Yibo Yang · Shixiang Chen · Xiangtai Li · Liang Xie · Zhouchen Lin · Dacheng Tao
- 2022 Poster: CGLB: Benchmark Tasks for Continual Graph Learning
  Xikun Zhang · Dongjin Song · Dacheng Tao
- 2022 Poster: Escaping from the Barren Plateau via Gaussian Initializations in Deep Variational Quantum Circuits
  Kaining Zhang · Liu Liu · Min-Hsiu Hsieh · Dacheng Tao
- 2022 Poster: Benefits of Permutation-Equivariance in Auction Mechanisms
  Tian Qin · Fengxiang He · Dingfeng Shi · Wenbing Huang · Dacheng Tao
- 2022 Poster: Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach
  Kaiwen Yang · Yanchao Sun · Jiahao Su · Fengxiang He · Xinmei Tian · Furong Huang · Tianyi Zhou · Dacheng Tao
- 2021 Poster: Class-Disentanglement and Applications in Adversarial Detection and Defense
  Kaiwen Yang · Tianyi Zhou · Yonggang Zhang · Xinmei Tian · Dacheng Tao
- 2021 Poster: Gauge Equivariant Transformer
  Lingshen He · Yiming Dong · Yisen Wang · Dacheng Tao · Zhouchen Lin
- 2020 Poster: Auto Learning Attention
  Benteng Ma · Jing Zhang · Yong Xia · Dacheng Tao
- 2019 Poster: Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation
  Qiming ZHANG · Jing Zhang · Wei Liu · Dacheng Tao
- 2019 Poster: Learn, Imagine and Create: Text-to-Image Generation from Prior Knowledge
  Tingting Qiao · Jing Zhang · Duanqing Xu · Dacheng Tao
- 2018 Poster: Learning Versatile Filters for Efficient Convolutional Neural Networks
  Yunhe Wang · Chang Xu · Chunjing XU · Chao Xu · Dacheng Tao