On the Value of Out-of-Distribution Testing: An Example of Goodhart's Law
Damien Teney · Ehsan Abbasnejad · Kushal Kafle · Robik Shrestha · Christopher Kanan · Anton van den Hengel

Tue Dec 08 09:00 AM -- 11:00 AM (PST) @ Poster Session 1 #455

Out-of-distribution (OOD) testing is increasingly popular for evaluating a machine learning system's ability to generalize beyond the biases of a training set. OOD benchmarks are designed to present a different joint distribution of data and labels between training and test time. VQA-CP has become the standard OOD benchmark for visual question answering, but we discovered three troubling practices in its current use. First, most published methods rely on explicit knowledge of the construction of the OOD splits. They often rely on "inverting" the distribution of labels, e.g. answering mostly "yes" when the common training answer was "no". Second, the OOD test set is used for model selection. Third, a model's in-domain performance is assessed after retraining it on in-domain splits (VQA v2) that exhibit a more balanced distribution of labels. These three practices defeat the objective of evaluating generalization and call into question the value of methods specifically designed for this dataset. We show that embarrassingly simple methods, including one that generates answers at random, surpass the state of the art on some question types. We provide short- and long-term solutions to avoid these pitfalls and realize the benefits of OOD evaluation.
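For illustration, below is a minimal Python sketch of a label-only random-answer baseline of the kind the abstract alludes to. It is a sketch under assumptions, not the paper's exact baseline: it assumes each example carries hypothetical question_type and answer fields, and it samples uniformly from the answers seen per question type in training. Because VQA-CP inverts the per-type answer distribution between splits, even such a strategy can score well on some question types.

    import random
    from collections import defaultdict

    def build_answer_pools(train_examples):
        """Map each question type to the list of answers seen in training.
        Field names 'question_type' and 'answer' are hypothetical."""
        pools = defaultdict(set)
        for example in train_examples:
            pools[example["question_type"]].add(example["answer"])
        return {qtype: sorted(answers) for qtype, answers in pools.items()}

    def random_answer_baseline(test_examples, pools, seed=0):
        """Predict a uniformly random in-vocabulary answer per question,
        ignoring the image and the question content entirely."""
        rng = random.Random(seed)
        predictions = []
        for example in test_examples:
            # Fall back to an arbitrary answer for unseen question types.
            candidates = pools.get(example["question_type"], ["yes"])
            predictions.append(rng.choice(candidates))
        return predictions

The point of the sketch is that the predictor never looks at the image or the question text; any accuracy it achieves on the OOD split reflects knowledge of the split's construction rather than generalization.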

Author Information

Damien Teney (University of Adelaide)
Ehsan Abbasnejad (University of Adelaide)
Kushal Kafle (Adobe Research)
Robik Shrestha (Rochester Institute of Technology)
Christopher Kanan (PAIGE.AI / RIT / CornellTech)
Anton van den Hengel (University of Adelaide)
