Audio-visual source localization is a challenging task that aims to predict the location of visual sound sources in a video. Since collecting ground-truth annotations of sounding objects can be costly, a plethora of weakly-supervised localization methods that learn from datasets with no bounding-box annotations have been proposed in recent years, leveraging the natural co-occurrence of audio and visual signals. Despite significant interest, popular evaluation protocols have two major flaws. First, they allow the use of a fully annotated dataset to perform early stopping, which significantly increases the annotation effort required for training. Second, current evaluation metrics assume that sound sources are visible at all times. This assumption is unrealistic, and better metrics are needed to capture a model's performance on (negative) samples with no visible sound sources. To this end, we extend the test sets of popular benchmarks, Flickr SoundNet and VGG-Sound Source, to include negative samples, and measure performance using metrics that balance localization accuracy and recall. Using the new protocol, we conducted an extensive evaluation of prior methods and found that most are unable to identify negatives and suffer from significant overfitting (i.e., they rely heavily on early stopping for their best results). We also propose a new approach to visual sound source localization that addresses both problems. In particular, we found that, through extreme visual dropout and the use of momentum encoders, the proposed approach combats overfitting effectively and establishes new state-of-the-art performance on both Flickr SoundNet and VGG-Sound Source. Code and pre-trained models are available at https://github.com/stoneMo/SLAVC.
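As a rough illustration of the extended evaluation protocol, the sketch below (Python/NumPy) scores each test sample by the peak confidence of its predicted localization map, treats low-confidence predictions as "no visible source", and computes an F-score that balances localization accuracy on positive samples against correct rejection of negatives. The function name, the 0.5 cIoU correctness criterion, and the threshold sweep are illustrative assumptions, not the exact metric definitions used in the paper.

import numpy as np

def f_score_with_negatives(conf, ciou, is_positive, thresholds=np.linspace(0.0, 1.0, 21)):
    # conf        : (N,) peak confidence of each predicted localization map
    # ciou        : (N,) cIoU between prediction and ground truth (ignored for negatives)
    # is_positive : (N,) bool, True if the sample contains a visible sound source
    best_f1 = 0.0
    for t in thresholds:
        detected = conf >= t                             # model claims a visible source
        correct = detected & is_positive & (ciou > 0.5)  # well-localized detection (0.5 is an assumed cutoff)
        tp = correct.sum()
        fp = (detected & ~correct).sum()                 # fired on a negative, or localized poorly
        fn = (is_positive & ~correct).sum()              # visible source missed or mislocalized
        precision = tp / max(tp + fp, 1)
        recall = tp / max(tp + fn, 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-8)
        best_f1 = max(best_f1, f1)
    return best_f1

Sweeping the confidence threshold and reporting the best F1 rewards models that both localize accurately on positive samples and abstain on negatives; a model that always claims a visible source is penalized by the false positives it accrues on the negative samples.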
Author Information
Shentong Mo (CMU)
Pedro Morgado (University of Wisconsin - Madison)
More from the Same Authors
- 2021: Simulated Annealing for Neural Architecture Search
  Shentong Mo · Jingfei Xia · Pinxu Ren
- 2021: Adaptive Fine-tuning for Vision and Language Pre-trained Models
  Shentong Mo · Jingfei Xia · Ihor Markevych
- 2021: Multi-modal Self-supervised Pre-training for Large-scale Genome Data
  Shentong Mo · Xi Fu · Chenyang Hong · Yizhen Chen · Yuxuan Zheng · Xiangru Tang · Yanyan Lan · Zhiqiang Shen · Eric Xing
- 2022: Multispectral Masked Autoencoder for Remote Sensing Representation Learning
  Yibing Wei · Zhicheng Yang · Hang Zhou · Mei Han · Pedro Morgado · Jui-Hsin Lai
- 2022: Hearing Touch: Using Contact Microphones for Robot Manipulation
  Shaden Alshammari · Victoria Dean · Tess Hellebrekers · Pedro Morgado · Abhinav Gupta
- 2022 Poster: Learning State-Aware Visual Representations from Audible Interactions
  Himangi Mittal · Pedro Morgado · Unnat Jain · Abhinav Gupta
- 2022 Poster: Multi-modal Grouping Network for Weakly-Supervised Audio-Visual Video Parsing
  Shentong Mo · Yapeng Tian
- 2021 Workshop: 2nd Workshop on Self-Supervised Learning: Theory and Practice
  Pengtao Xie · Ishan Misra · Pulkit Agrawal · Abdelrahman Mohamed · Shentong Mo · Youwei Liang · Jeannette Bohg · Kristina N Toutanova
- 2021: Poster Session 2 (gather.town)
  Wenjie Li · Akhilesh Soni · Jinwuk Seok · Jianhao Ma · Jeffery Kline · Mathieu Tuli · Miaolan Xie · Robert Gower · Quanqi Hu · Matteo Cacciola · Yuanlu Bai · Boyue Li · Wenhao Zhan · Shentong Mo · Junhyung Lyle Kim · Sajad Fathi Hafshejani · Chris Junchi Li · Zhishuai Guo · Harshvardhan Harshvardhan · Neha Wadia · Tatjana Chavdarova · Difan Zou · Zixiang Chen · Aman Gupta · Jacques Chen · Betty Shea · Benoit Dherin · Aleksandr Beznosikov