A Meta-Analysis of Overfitting in Machine Learning
Rebecca Roelofs · Vaishaal Shankar · Benjamin Recht · Sara Fridovich-Keil · Moritz Hardt · John Miller · Ludwig Schmidt

Wed Dec 11 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #1

We conduct the first large meta-analysis of overfitting due to test set reuse in the machine learning community. Our analysis is based on over one hundred machine learning competitions hosted on the Kaggle platform over the course of several years. In each competition, numerous practitioners repeatedly evaluated their progress against a holdout set that forms the basis of a public ranking available throughout the competition. Performance on a separate test set used only once determined the final ranking. By systematically comparing the public ranking with the final ranking, we assess how much participants adapted to the holdout set over the course of a competition. Our study shows, somewhat surprisingly, little evidence of substantial overfitting. These findings speak to the robustness of the holdout method across different data domains, loss functions, model classes, and human analysts.
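The comparison described in the abstract, between a repeatedly queried public leaderboard and a once-used private test set, can be illustrated with a minimal sketch. The function name and the score format (per-participant accuracy pairs) are hypothetical, not taken from the paper's code; the idea is simply that a large positive mean gap between public and private scores would signal adaptive overfitting to the holdout set.

```python
def leaderboard_shift(public_scores, private_scores):
    """Mean score drop from the public to the private test split.

    public_scores / private_scores: parallel lists of accuracies,
    one entry per submission (hypothetical format for illustration).
    A large positive result (public >> private) would suggest that
    participants adapted to the public holdout over the competition.
    """
    gaps = [pub - priv for pub, priv in zip(public_scores, private_scores)]
    return sum(gaps) / len(gaps)


# Toy example: three submissions with nearly matching splits,
# i.e. the "little evidence of substantial overfitting" case.
shift = leaderboard_shift([0.90, 0.85, 0.80], [0.89, 0.85, 0.81])
```

A shift near zero, as in the toy example, is consistent with the paper's finding that public and final rankings largely agree.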

Author Information

Rebecca Roelofs (UC Berkeley)
Vaishaal Shankar (UC Berkeley)
Benjamin Recht (UC Berkeley)
Sara Fridovich-Keil (UC Berkeley)
Moritz Hardt (UC Berkeley)
John Miller (UC Berkeley)
Ludwig Schmidt (UC Berkeley)