We present a framework for learning multimodal representations from unlabeled data using convolution-free Transformer architectures. Specifically, our Video-Audio-Text Transformer (VATT) takes raw signals as inputs and extracts multimodal representations that are rich enough to benefit a variety of downstream tasks. We train VATT end-to-end from scratch using multimodal contrastive losses and evaluate its performance on the downstream tasks of video action recognition, audio event classification, image classification, and text-to-video retrieval. Furthermore, we study a modality-agnostic, single-backbone Transformer that shares weights among the three modalities. We show that the convolution-free VATT outperforms state-of-the-art ConvNet-based architectures on the downstream tasks. In particular, VATT's vision Transformer achieves top-1 accuracies of 82.1% on Kinetics-400, 83.6% on Kinetics-600, 72.7% on Kinetics-700, and 41.1% on Moments in Time, setting new records while avoiding supervised pre-training. Transferring to image classification yields 78.7% top-1 accuracy on ImageNet, compared to 64.7% when training the same Transformer from scratch, demonstrating the generalizability of our model despite the domain gap between videos and images. VATT's audio Transformer also sets a new record for waveform-based audio event recognition, achieving an mAP of 39.4% on AudioSet without any supervised pre-training.
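The abstract refers to training with multimodal contrastive losses but does not spell them out here. Below is a minimal sketch of a symmetric InfoNCE-style pairwise contrastive loss between two modality embeddings (e.g., video and audio); the function name, temperature value, and dimensions are illustrative assumptions, and the paper's actual objectives (e.g., NCE for video-audio and MIL-NCE for video-text pairs) use projection heads and details not shown.

```python
import torch
import torch.nn.functional as F

def infonce_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                 temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between two batches of modality embeddings.

    z_a, z_b: (batch, dim) embeddings from two modalities projected into a
    common space. Matching rows are positive pairs; every other row in the
    batch serves as a negative.
    """
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Average both directions (a -> b and b -> a) of the contrastive loss.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage sketch: a batch of 8 video/audio clip embeddings of dimension 512.
video_emb = torch.randn(8, 512)
audio_emb = torch.randn(8, 512)
loss = infonce_loss(video_emb, audio_emb)
```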
Author Information
Hassan Akbari (Google)
Liangzhe Yuan (School of Engineering and Applied Science, University of Pennsylvania)
Rui Qian (Cornell University)
Wei-Hong Chuang (Google)
Shih-Fu Chang (Columbia University)
Yin Cui (Cornell University)
Boqing Gong (Tencent AI Lab)
More from the Same Authors
- 2022 Poster: Language Models with Image Descriptors are Strong Few-Shot Video-Language Learners »
  Zhenhailong Wang · Manling Li · Ruochen Xu · Luowei Zhou · Jie Lei · Xudong Lin · Shuohang Wang · Ziyi Yang · Chenguang Zhu · Derek Hoiem · Shih-Fu Chang · Mohit Bansal · Heng Ji
- 2022 Poster: Scaling Multimodal Pre-Training via Cross-Modality Gradient Harmonization »
  Junru Wu · Yi Liang · feng han · Hassan Akbari · Zhangyang Wang · Cong Yu
- 2021 Poster: On Model Calibration for Long-Tailed Object Detection and Instance Segmentation »
  Tai-Yu Pan · Cheng Zhang · Yandong Li · Hexiang Hu · Dong Xuan · Soravit Changpinyo · Boqing Gong · Wei-Lun Chao
- 2018 Poster: Low-shot Learning via Covariance-Preserving Adversarial Augmentation Networks »
  Hang Gao · Zheng Shou · Alireza Zareian · Hanwang Zhang · Shih-Fu Chang