Plotting a learner's average performance against the number of training samples results in a learning curve. Studying such curves on one or more data sets is a way to gain a better understanding of the learner's generalization properties. The behavior of learning curves is, however, not well understood and can be, to most researchers, quite unexpected. Our work introduces the formal notion of risk monotonicity, which requires that the risk does not deteriorate with increasing training set size, in expectation over the training samples. We then present the surprising result that various standard learners, specifically those that minimize the empirical risk, can act nonmonotonically irrespective of the training sample size. We provide a theoretical underpinning for specific instantiations from classification, regression, and density estimation. Altogether, the proposed monotonicity notion opens up a whole new direction of research.
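To make the definition concrete, the sketch below estimates such a learning curve by Monte Carlo for one standard empirical risk minimizer: minimum-norm least squares on synthetic Gaussian data. This setup (dimension, noise level, sample sizes) is our own illustrative choice, not one of the paper's constructions. The expected risk at each training set size n is approximated by averaging the test risk over many independent training samples, mirroring the "expectation over the training samples" in the definition.

```python
# A minimal sketch of a Monte Carlo learning curve, under our own assumptions
# (synthetic Gaussian data, minimum-norm least squares); an illustration of
# the monotonicity definition, not the paper's specific constructions.
import numpy as np

rng = np.random.default_rng(0)
d = 10                        # input dimension (hypothetical choice)
w_true = rng.normal(size=d)   # ground-truth linear model
sigma = 0.5                   # label noise level

def sample(n):
    """Draw n i.i.d. pairs (x, y) with y = <w_true, x> + noise."""
    X = rng.normal(size=(n, d))
    y = X @ w_true + sigma * rng.normal(size=n)
    return X, y

def test_risk(w, n_test=10_000):
    """Estimate the squared risk of w on a fresh sample."""
    X, y = sample(n_test)
    return np.mean((X @ w - y) ** 2)

def expected_risk(n, n_repeats=200):
    """Average the risk over many training samples of size n,
    i.e., the expectation over the training samples at this n."""
    risks = []
    for _ in range(n_repeats):
        X, y = sample(n)
        # Minimum-norm empirical risk minimizer for squared loss.
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        risks.append(test_risk(w))
    return float(np.mean(risks))

for n in [2, 5, 8, 10, 12, 20, 50]:
    print(f"n = {n:3d}   expected risk ~ {expected_risk(n):.3f}")
```

In this setup the printed curve rises sharply as n approaches d before falling again, so more training data temporarily hurts: a concrete instance of nonmonotone risk.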
Author Information
Marco Loog (Delft University of Technology & University of Copenhagen)
Tom Viering (Delft University of Technology, Netherlands)
Alexander Mey (TU Delft)
More from the Same Authors
- 2023 Poster: Why Did This Model Forecast This Future? Information-Theoretic Saliency for Counterfactual Explanations of Probabilistic Regression Models (Chirag Raman · Alec Nonnemaker · Amelia Villegas-Morcillo · Hayley Hung · Marco Loog)
- 2018 Poster: The Pessimistic Limits and Possibilities of Margin-based Losses in Semi-supervised Learning (Jesse Krijthe · Marco Loog)