A recent line of research has highlighted the existence of a "double descent" phenomenon in deep learning, whereby increasing the number of training examples N causes the generalization error of neural networks to peak when N is of the same order as the number of parameters P. In earlier works, a similar phenomenon was shown to exist in simpler models such as linear regression, where the peak instead occurs when N is equal to the input dimension D. Since both peaks coincide with the interpolation threshold, they are often conflated in the literature. In this paper, we show that despite their apparent similarity, these two scenarios are inherently different. In fact, both peaks can coexist when neural networks are applied to noisy regression tasks. The relative size of the peaks is then governed by the degree of nonlinearity of the activation function. Building on recent developments in the analysis of random feature models, we provide a theoretical foundation for this sample-wise triple descent. As shown previously, the nonlinear peak at N=P is a true divergence caused by the extreme sensitivity of the output function both to the noise corrupting the labels and to the initialization of the random features (or the weights in neural networks). This peak survives in the absence of noise, but can be suppressed by regularization. In contrast, the linear peak at N=D is solely due to overfitting the noise in the labels, and forms earlier during training. We show that this peak is implicitly regularized by the nonlinearity, which is why it only becomes salient at high noise and is weakly affected by explicit regularization. Throughout the paper, we compare the analytical results obtained in the random feature model with the outcomes of numerical experiments involving realistic neural networks.
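The setting the abstract describes can be reproduced in a few lines. Below is a minimal sketch (not the authors' code) of random feature ridge regression on a noisy linear teacher: sweeping the sample size N past the linear threshold N=D and the nonlinear threshold N=P exposes the two peaks in the test error. All parameter choices (D, P, the tanh nonlinearity, the noise level sigma, the ridge strength) are illustrative assumptions.

```python
# Sketch of the random feature triple descent setting, assuming a
# noisy linear teacher, tanh random features, and near-vanishing ridge.
import numpy as np

rng = np.random.default_rng(0)
D, P = 50, 200           # input dimension and number of random features
sigma = 0.5              # label noise level
ridge = 1e-6             # (near-)vanishing explicit regularization
n_test = 2000

beta = rng.standard_normal(D) / np.sqrt(D)    # linear teacher
W = rng.standard_normal((D, P)) / np.sqrt(D)  # frozen random projection

def features(X):
    # nonlinear random features Z = tanh(X W)
    return np.tanh(X @ W)

X_test = rng.standard_normal((n_test, D))
y_test = X_test @ beta                        # noiseless test labels

for N in [25, 50, 100, 200, 400, 800]:
    X = rng.standard_normal((N, D))
    y = X @ beta + sigma * rng.standard_normal(N)  # noisy training labels
    Z = features(X)
    # ridge solution a = (Z^T Z + ridge I)^{-1} Z^T y
    a = np.linalg.solve(Z.T @ Z + ridge * np.eye(P), Z.T @ y)
    err = np.mean((features(X_test) @ a - y_test) ** 2)
    print(f"N = {N:4d}  test MSE = {err:.3f}")
```

With the noise level and ridge set as above, the printed test error rises near N=D=50 (the linear, noise-driven peak) and again near N=P=200 (the nonlinear, initialization-sensitive peak); increasing the ridge suppresses the second peak much more strongly than the first, consistent with the abstract's claims.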
Author Information
Stéphane d'Ascoli (ENS / FAIR)
Currently a joint Ph.D. student between ENS (supervised by Giulio Biroli) and FAIR (supervised by Levent Sagun), working on the theory of deep learning.
Levent Sagun
Giulio Biroli (ENS)
Related Events (a corresponding poster, oral, or spotlight)
- 2020 Spotlight: Triple descent and the two kinds of overfitting: where & why do they appear?
  Wed. Dec 9th, 03:20 -- 03:30 PM · Room: Orals & Spotlights: Deep Learning
More from the Same Authors
- 2022 Poster: End-to-end Symbolic Regression with Transformers
  Pierre-alexandre Kamienny · Stéphane d'Ascoli · Guillaume Lample · Francois Charton
- 2021 Poster: On the interplay between data structure and loss function in classification problems
  Stéphane d'Ascoli · Marylou Gabrié · Levent Sagun · Giulio Biroli
- 2020 Poster: An analytic theory of shallow networks dynamics for hinge loss classification
  Franco Pellegrini · Giulio Biroli
- 2020 Poster: Complex Dynamics in Simple Neural Networks: Understanding Gradient Flow in Phase Retrieval
  Stefano Sarao Mannelli · Giulio Biroli · Chiara Cammarota · Florent Krzakala · Pierfrancesco Urbani · Lenka Zdeborová
- 2019 Poster: Finding the Needle in the Haystack with Convolutions: on the benefits of architectural bias
  Stéphane d'Ascoli · Levent Sagun · Giulio Biroli · Joan Bruna
- 2019 Poster: Who is Afraid of Big Bad Minima? Analysis of gradient-flow in spiked matrix-tensor models
  Stefano Sarao Mannelli · Giulio Biroli · Chiara Cammarota · Florent Krzakala · Lenka Zdeborová
- 2019 Spotlight: Who is Afraid of Big Bad Minima? Analysis of gradient-flow in spiked matrix-tensor models
  Stefano Sarao Mannelli · Giulio Biroli · Chiara Cammarota · Florent Krzakala · Lenka Zdeborová