Machine learning advances in the last decade have relied significantly on large-scale datasets that continue to grow in size. Increasingly, those datasets also contain different data modalities. However, large multi-modal datasets are hard to annotate, and annotations may contain biases that we are often unaware of. Deep-net-based classifiers, in turn, are prone to exploiting those biases and finding shortcuts. To study and quantify this concern, we introduce the perceptual score, a metric that assesses the degree to which a model relies on different subsets of the input features, i.e., modalities. Using the perceptual score, we find a surprisingly consistent trend across four popular datasets: recent, more accurate state-of-the-art multi-modal models for visual question answering or visual dialog tend to perceive the visual data less than their predecessors. This is concerning, as it means answers are increasingly inferred from textual cues alone. The perceptual score also helps to analyze model biases by decomposing the score into contributions from data subsets. We hope to spur a discussion on the perceptiveness of multi-modal models and to encourage the community working on multi-modal classifiers to quantify perceptiveness via the proposed perceptual score.
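To make the idea concrete, below is a minimal sketch of one way such a score could be computed, assuming a permutation-style test: shuffle one modality's features across examples to break their pairing with the rest of the input, and measure the resulting accuracy drop. The function name `perceptual_score`, the `predict` callable, and the normalization by the clean accuracy are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def perceptual_score(predict, text_feats, image_feats, labels, rng=None):
    """Sketch: estimate reliance on the image modality via permutation.

    `predict` is assumed to map (text_feats, image_feats) to predicted
    class ids. The score is the relative accuracy drop when the image
    features are shuffled across examples.
    """
    if rng is None:
        rng = np.random.default_rng(0)

    # Accuracy with correctly paired modalities.
    acc = np.mean(predict(text_feats, image_feats) == labels)

    # Accuracy after permuting the image modality across the dataset,
    # which destroys any image-question correspondence.
    perm = rng.permutation(len(image_feats))
    acc_perm = np.mean(predict(text_feats, image_feats[perm]) == labels)

    # Relative drop: ~1 if predictions collapse without the image,
    # ~0 if the image is effectively ignored.
    return (acc - acc_perm) / max(acc, 1e-12)

# Toy usage with a classifier that ignores the image entirely:
predict = lambda t, v: (t.sum(axis=1) > 0).astype(int)
t = np.random.randn(100, 8)
v = np.random.randn(100, 8)
y = (t.sum(axis=1) > 0).astype(int)
print(perceptual_score(predict, t, v, y))  # ~0.0: no visual perception
```

A score near zero flags exactly the failure mode described above: the model answers from textual cues alone, regardless of the visual input.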
Author Information
Itai Gat (Technion)
Idan Schwartz (Technion)
Alex Schwing (University of Illinois at Urbana-Champaign)
More from the Same Authors
- 2021 Spotlight: Per-Pixel Classification is Not All You Need for Semantic Segmentation
  Bowen Cheng · Alex Schwing · Alexander Kirillov
- 2022 Poster: Optimizing Relevance Maps of Vision Transformers Improves Robustness
  Hila Chefer · Idan Schwartz · Lior Wolf
- 2022 Poster: On the Importance of Gradient Norm in PAC-Bayesian Bounds
  Itai Gat · Yossi Adi · Alex Schwing · Tamir Hazan
- 2021 Poster: Bridging the Imitation Gap by Adaptive Insubordination
  Luca Weihs · Unnat Jain · Iou-Jen Liu · Jordi Salvador · Svetlana Lazebnik · Aniruddha Kembhavi · Alex Schwing
- 2021 Poster: Per-Pixel Classification is Not All You Need for Semantic Segmentation
  Bowen Cheng · Alex Schwing · Alexander Kirillov
- 2021 Poster: A Contrastive Learning Approach for Training Variational Autoencoder Priors
  Jyoti Aneja · Alex Schwing · Jan Kautz · Arash Vahdat
- 2021 Poster: Class-agnostic Reconstruction of Dynamic Objects from Videos
  Zhongzheng Ren · Xiaoming Zhao · Alex Schwing
- 2020 Poster: Removing Bias in Multi-modal Classifiers: Regularization by Maximizing Functional Entropies
  Itai Gat · Idan Schwartz · Alex Schwing · Tamir Hazan
- 2017 Poster: High-Order Attention Models for Visual Question Answering
  Idan Schwartz · Alex Schwing · Tamir Hazan
- 2016 Poster: Constraints Based Convex Belief Propagation
  Yaniv Tenzer · Alex Schwing · Kevin Gimpel · Tamir Hazan
- 2016 Poster: Learning Deep Parsimonious Representations
  Renjie Liao · Alex Schwing · Richard Zemel · Raquel Urtasun
- 2015 Poster: Smooth and Strong: MAP Inference with Linear Convergence
  Ofer Meshi · Mehrdad Mahdavi · Alex Schwing
- 2014 Poster: Efficient Inference of Continuous Markov Random Fields with Polynomial Potentials
  Shenlong Wang · Alex Schwing · Raquel Urtasun
- 2014 Poster: Message Passing Inference for Large Scale Graphical Models with High Order Potentials
  Jian Zhang · Alex Schwing · Raquel Urtasun
- 2013 Poster: Latent Structured Active Learning
  Wenjie Luo · Alex Schwing · Raquel Urtasun
- 2012 Poster: Globally Convergent Dual MAP LP Relaxation Solvers using Fenchel-Young Margins
  Alex Schwing · Tamir Hazan · Marc Pollefeys · Raquel Urtasun