Deep learning algorithms are well known to fit the training data very well, often fitting even outliers and mislabeled data points. Such fitting requires memorization of training data labels, a phenomenon that has attracted significant research interest but has so far lacked a compelling explanation. A recent work of Feldman (2019) proposes a theoretical explanation for this phenomenon based on a combination of two insights. First, natural image and data distributions are (informally) known to be long-tailed, that is, they contain a significant fraction of rare and atypical examples. Second, in a simple theoretical model, such memorization is necessary for achieving close-to-optimal generalization error when the data distribution is long-tailed. However, no direct empirical evidence for this explanation, or even an approach for obtaining such evidence, was given.
In this work we design experiments to test the key ideas in this theory. The experiments require estimation of the influence of each training example on the accuracy at each test example, as well as memorization values of training examples. Estimating these quantities directly is computationally prohibitive, but we show that the closely related subsampled influence and memorization values can be estimated much more efficiently. Our experiments demonstrate the significant benefits of memorization for generalization on several standard benchmarks. They also provide quantitative and visually compelling evidence for the theory put forth in Feldman (2019).
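To make the subsampled estimator concrete, here is a minimal sketch of the idea: train many models on random ~70% subsets of the training set, then estimate the memorization value of each training example as the gap between the model's accuracy on that example when it was included in the subset versus when it was held out. This toy uses a 1-nearest-neighbor classifier as a stand-in for a deep network and synthetic data with one deliberately mislabeled point; all names, constants, and the choice of learner are illustrative, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated clusters plus one mislabeled point.
n = 40
X = np.vstack([rng.normal(-2.0, 0.5, (n // 2, 2)),
               rng.normal(2.0, 0.5, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))
y[0] = 1  # flip one label: this point can only be fit by memorizing it

def predict_1nn(Xtr, ytr, Xq):
    """1-nearest-neighbor classifier standing in for a trained network."""
    d = np.linalg.norm(Xq[:, None, :] - Xtr[None, :, :], axis=2)
    return ytr[np.argmin(d, axis=1)]

T, p = 300, 0.7  # number of subsampled models, subset inclusion rate
correct_in = np.zeros(n);  count_in = np.zeros(n)
correct_out = np.zeros(n); count_out = np.zeros(n)

for _ in range(T):
    mask = rng.random(n) < p               # random ~70% training subset
    preds = predict_1nn(X[mask], y[mask], X)
    correct = (preds == y).astype(float)
    correct_in[mask] += correct[mask];     count_in[mask] += 1
    correct_out[~mask] += correct[~mask];  count_out[~mask] += 1

# Subsampled memorization estimate: accuracy on x_i with i in the
# training subset minus accuracy on x_i with i held out.
mem = correct_in / count_in - correct_out / count_out
print(f"memorization of mislabeled point:        {mem[0]:.2f}")
print(f"median memorization of typical points:   {np.median(mem[1:]):.2f}")
```

The mislabeled point receives a memorization value near 1 (models only classify it "correctly" when they have seen its flipped label), while typical cluster points receive values near 0. The same subset bookkeeping extends to influence: replace the query point `x_i` with a test example and compare its accuracy across models that did and did not contain a given training example.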
Author Information
Vitaly Feldman (Apple)
Chiyuan Zhang (Google Brain)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation
  Fri Dec 11, 03:20 -- 03:30 AM, Orals & Spotlights: Deep Learning
More from the Same Authors
- 2020 Poster: Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
  Raef Bassily · Vitaly Feldman · Cristóbal Guzmán · Kunal Talwar
- 2020 Spotlight: Stability of Stochastic Gradient Descent on Nonsmooth Convex Losses
  Raef Bassily · Vitaly Feldman · Cristóbal Guzmán · Kunal Talwar
- 2020 Poster: What is being transferred in transfer learning?
  Behnam Neyshabur · Hanie Sedghi · Chiyuan Zhang
- 2019 Poster: Transfusion: Understanding Transfer Learning for Medical Imaging
  Maithra Raghu · Chiyuan Zhang · Jon Kleinberg · Samy Bengio
- 2019 Poster: Private Stochastic Convex Optimization with Optimal Rates
  Raef Bassily · Vitaly Feldman · Kunal Talwar · Abhradeep Guha Thakurta
- 2019 Spotlight: Private Stochastic Convex Optimization with Optimal Rates
  Raef Bassily · Vitaly Feldman · Kunal Talwar · Abhradeep Guha Thakurta
- 2019 Poster: Locally Private Learning without Interaction Requires Separation
  Amit Daniely · Vitaly Feldman
- 2018 Poster: The Everlasting Database: Statistical Validity at a Fair Price
  Blake Woodworth · Vitaly Feldman · Saharon Rosset · Nati Srebro
- 2018 Poster: Generalization Bounds for Uniformly Stable Algorithms
  Vitaly Feldman · Jan Vondrak
- 2018 Spotlight: Generalization Bounds for Uniformly Stable Algorithms
  Vitaly Feldman · Jan Vondrak
- 2016 Workshop: Adaptive Data Analysis
  Vitaly Feldman · Aaditya Ramdas · Aaron Roth · Adam Smith
- 2016 Poster: Generalization of ERM in Stochastic Convex Optimization: The Dimension Strikes Back
  Vitaly Feldman
- 2016 Oral: Generalization of ERM in Stochastic Convex Optimization: The Dimension Strikes Back
  Vitaly Feldman
- 2015 Workshop: Adaptive Data Analysis
  Adam Smith · Aaron Roth · Vitaly Feldman · Moritz Hardt
- 2015 Poster: Generalization in Adaptive Data Analysis and Holdout Reuse
  Cynthia Dwork · Vitaly Feldman · Moritz Hardt · Toni Pitassi · Omer Reingold · Aaron Roth
- 2015 Poster: Subsampled Power Iteration: a Unified Algorithm for Block Models and Planted CSP's
  Vitaly Feldman · Will Perkins · Santosh Vempala
- 2013 Poster: Statistical Active Learning Algorithms
  Maria-Florina F Balcan · Vitaly Feldman