Out-of-distribution (OOD) testing is increasingly popular for evaluating a machine learning system's ability to generalize beyond the biases of a training set. OOD benchmarks are designed to present a different joint distribution of data and labels between training and test time. VQA-CP has become the standard OOD benchmark for visual question answering, but we discovered three troubling practices in its current use. First, most published methods rely on explicit knowledge of the construction of the OOD splits. They often rely on "inverting" the distribution of labels, e.g. answering mostly "yes" when the common training answer was "no". Second, the OOD test set is used for model selection. Third, a model's in-domain performance is assessed after retraining it on in-domain splits (VQA v2) that exhibit a more balanced distribution of labels. These three practices defeat the objective of evaluating generalization, and put into question the value of methods specifically designed for this dataset. We show that embarrassingly simple methods, including one that generates answers at random, surpass the state of the art on some question types. We provide short- and long-term solutions to avoid these pitfalls and realize the benefits of OOD evaluation.
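The first pitfall can be made concrete with a small sketch. This is not the paper's method or any published model, only an illustration (with made-up data) of how a trivial baseline that exploits knowledge of the split construction can score well: it simply samples answers in inverse proportion to their training frequency, so a question type dominated by "no" at training time yields mostly "yes" predictions.

```python
from collections import Counter
import random

def inverted_answer_baseline(train_answers, rng=random.Random(0)):
    """Trivial baseline exploiting known OOD split construction:
    sample answers in inverse proportion to their training frequency."""
    counts = Counter(train_answers)
    # Invert frequencies: rare training answers become likely predictions.
    weights = {a: 1.0 / c for a, c in counts.items()}
    answers, w = zip(*weights.items())

    def predict():
        return rng.choices(answers, weights=w, k=1)[0]

    return predict

# Hypothetical yes/no question type where training answers are mostly "no":
# the baseline answers mostly "yes", mimicking the inverted test split.
train = ["no"] * 90 + ["yes"] * 10
predict = inverted_answer_baseline(train)
preds = [predict() for _ in range(1000)]
```

Such a baseline knows nothing about images or questions, which is why matching or beating sophisticated models with it puts the evaluation protocol, rather than the models, into question.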
Author Information
Damien Teney (University of Adelaide)
Ehsan Abbasnejad (University of Adelaide)
Kushal Kafle (Adobe Research)
Robik Shrestha (Rochester Institute of Technology)
Christopher Kanan (PAIGE.AI / RIT / CornellTech)
Anton van den Hengel (University of Adelaide)
More from the Same Authors
-
2022 : Distributionally Robust Bayesian Optimization with φ-divergences »
Hisham Husain · Vu Nguyen · Anton van den Hengel -
2022 Poster: Truncated Matrix Power Iteration for Differentiable DAG Learning »
Zhen Zhang · Ignavier Ng · Dong Gong · Yuhang Liu · Ehsan Abbasnejad · Mingming Gong · Kun Zhang · Javen Qinfeng Shi -
2020 Poster: Counterfactual Vision-and-Language Navigation: Unravelling the Unseen »
Amin Parvaneh · Ehsan Abbasnejad · Damien Teney · Javen Qinfeng Shi · Anton van den Hengel -
2020 Spotlight: Counterfactual Vision-and-Language Navigation: Unravelling the Unseen »
Amin Parvaneh · Ehsan Abbasnejad · Damien Teney · Javen Qinfeng Shi · Anton van den Hengel -
2018 : Poster Sessions and Lunch (Provided) »
Akira Utsumi · Alane Suhr · Ji Zhang · Ramon Sanabria · Kushal Kafle · Nicholas Chen · Seung Wook Kim · Aishwarya Agrawal · SRI HARSHA DUMPALA · Shikhar Murty · Pablo Azagra · Jean ROUAT · Alaaeldin Ali · SUBBAREDDY OOTA · Angela Lin · Shruti Palaskar · Farley Lai · Amir Aly · Tingke Shen · Dianqi Li · Jianguo Zhang · Rita Kuznetsova · Jinwon An · Jean-Benoit Delbrouck · Tomasz Kornuta · Syed Ashar Javed · Christopher Davis · John Co-Reyes · Vasu Sharma · Sungwon Lyu · Ning Xie · Ankita Kalra · Huan Ling · Oleksandr Maksymets · Bhavana Mahendra Jain · Shun-Po Chuang · Sanyam Agarwal · Jerome Abdelnour · Yufei Feng · vincent albouy · Siddharth Karamcheti · Derek Doran · Roberta Raileanu · Jonathan Heek -
2015 Poster: Deeply Learning the Messages in Message Passing Inference »
Guosheng Lin · Chunhua Shen · Ian Reid · Anton van den Hengel -
2014 Poster: Encoding High Dimensional Local Features by Sparse Coding Based Fisher Vectors »
Lingqiao Liu · Chunhua Shen · Lei Wang · Anton van den Hengel · Chao Wang -
2009 Poster: Positive Semidefinite Metric Learning with Boosting »
Chunhua Shen · Junae Kim · Lei Wang · Anton van den Hengel