

Poster

Cross-validation Confidence Intervals for Test Error

Pierre Bayle · Alexandre Bayle · Lucas Janson · Lester Mackey

Poster Session 1 #190

Keywords: [ Deep Learning ] [ Deep Learning -> Optimization for Deep Networks ] [ Regularization ] [ Theory ]


Abstract:

This work develops central limit theorems for cross-validation and consistent estimators of the asymptotic variance under weak stability conditions on the learning algorithm. Together, these results provide practical, asymptotically exact confidence intervals for k-fold test error and valid, powerful hypothesis tests of whether one learning algorithm has smaller k-fold test error than another. These results are also the first of their kind for the popular choice of leave-one-out cross-validation. In our experiments with diverse learning algorithms, the resulting intervals and tests outperform the most popular alternative methods from the literature.
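As a rough illustration of the recipe the abstract describes, the following Python sketch (not the authors' implementation) treats the k-fold CV error as a mean of per-example held-out losses, pairs it with a normal-approximation confidence interval, and runs a paired one-sided test of whether one algorithm has smaller k-fold test error than another. The zero-one loss, the plug-in sample variance (a stand-in for the paper's consistent variance estimator), and the specific models and function names are illustrative assumptions.

import numpy as np
from scipy import stats
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

def per_example_losses(make_model, X, y, k=10, seed=0):
    # Held-out zero-one loss for every example under k-fold CV.
    losses = np.empty(len(y))
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=seed).split(X):
        model = make_model().fit(X[train_idx], y[train_idx])
        losses[test_idx] = (model.predict(X[test_idx]) != y[test_idx]).astype(float)
    return losses

def kfold_ci(losses, alpha=0.05):
    # Normal-approximation interval: the k-fold CV error is the mean of the
    # per-example losses; the sample variance here is a simple plug-in, not
    # the paper's exact variance estimator.
    n = len(losses)
    half_width = stats.norm.ppf(1 - alpha / 2) * losses.std(ddof=1) / np.sqrt(n)
    return losses.mean() - half_width, losses.mean() + half_width

def kfold_test(losses_a, losses_b):
    # One-sided test of H1: algorithm A has smaller k-fold test error than B,
    # using paired per-example loss differences computed on the same folds.
    d = losses_a - losses_b
    stat = np.sqrt(len(d)) * d.mean() / d.std(ddof=1)
    return stats.norm.cdf(stat)  # small p-value favors A over B

if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=2000, random_state=0)
    losses_lr = per_example_losses(lambda: LogisticRegression(max_iter=1000), X, y)
    losses_dt = per_example_losses(lambda: DecisionTreeClassifier(max_depth=3), X, y)
    print("95% CI for logistic regression k-fold error:", kfold_ci(losses_lr))
    print("p-value (logistic < tree):", kfold_test(losses_lr, losses_dt))

Because both calls to per_example_losses use the same fold seed, the loss differences are paired example by example, which is what makes the simple one-sample test statistic above applicable.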
