Poster
Learning sparse features can lead to overfitting in neural networks
Leonardo Petrini · Francesco Cagnetta · Eric Vanden-Eijnden · Matthieu Wyart
It is widely believed that the success of deep networks lies in their ability to learn a meaningful representation of the features of the data. Yet, understanding when and how this feature learning improves performance remains a challenge: for example, it is beneficial for modern architectures trained to classify images, whereas it is detrimental for fully-connected networks trained for the same task on the same data. Here we propose an explanation for this puzzle, by showing that feature learning can perform worse than lazy training (via random feature kernel or the NTK) as the former can lead to a sparser neural representation. Although sparsity is known to be essential for learning anisotropic data, it is detrimental when the target function is constant or smooth along certain directions of input space. We illustrate this phenomenon in two settings: (i) regression of Gaussian random functions on the $d$-dimensional unit sphere and (ii) classification of benchmark datasets of images. For (i), we compute the scaling of the generalization error with the number of training points, and show that methods that do not learn features generalize better, even when the dimension of the input space is large. For (ii), we show empirically that learning features can indeed lead to sparse and thereby less smooth representations of the image predictors. This fact is plausibly responsible for the deterioration in performance, which is known to be correlated with smoothness along diffeomorphisms.
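The lazy-training baseline contrasted with feature learning above can be illustrated with a minimal sketch: ridge regression on frozen random ReLU features (a random-feature approximation of the kernel regime), fitting a target on the unit sphere that varies along only one input direction. The target `f`, the feature count, and the ridge parameter `lam` are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test, n_feat = 5, 500, 200, 2000

def sphere(n, d):
    """Sample n points uniformly on the (d-1)-sphere."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Toy target: constant along all input directions except the first,
# i.e. smooth/invariant along d-1 directions of input space.
f = lambda x: np.cos(3 * x[:, 0])

Xtr, Xte = sphere(n_train, d), sphere(n_test, d)

# Frozen random first-layer weights -> lazy / random-feature regime:
# features are never adapted to the data, so no sparsification occurs.
W = sphere(n_feat, d)
phi = lambda x: np.maximum(x @ W.T, 0.0)  # ReLU random features
Ptr, Pte = phi(Xtr), phi(Xte)

# Ridge regression on the fixed features (only the readout is trained).
lam = 1e-4
a = np.linalg.solve(Ptr.T @ Ptr + lam * np.eye(n_feat), Ptr.T @ f(Xtr))

err = np.mean((Pte @ a - f(Xte)) ** 2)  # test mean squared error
```

Repeating this with the first layer trained (feature learning) and comparing `err` as `n_train` grows is the kind of experiment the scaling analysis in (i) formalizes.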
Author Information
Leonardo Petrini (EPFL)
Francesco Cagnetta (Swiss Federal Institute of Technology Lausanne)
Eric Vanden-Eijnden (New York University)
Matthieu Wyart (Swiss Federal Institute of Technology Lausanne)
More from the Same Authors
- 2022 Poster: Learning Optimal Flows for Non-Equilibrium Importance Sampling
  Yu Cao · Eric Vanden-Eijnden
- 2023 Poster: Efficient Training of Energy-Based Models Using Jarzynski Equality
  Davide Carbone · Mengjian Hua · Simon Coste · Eric Vanden-Eijnden
- 2021 Poster: Locality defeats the curse of dimensionality in convolutional teacher-student scenarios
  Alessandro Favero · Francesco Cagnetta · Matthieu Wyart
- 2021 Poster: Relative stability toward diffeomorphisms indicates performance in deep nets
  Leonardo Petrini · Alessandro Favero · Mario Geiger · Matthieu Wyart
- 2020 Poster: Optimization and Generalization of Shallow Neural Networks with Quadratic Activation Functions
  Stefano Sarao Mannelli · Eric Vanden-Eijnden · Lenka Zdeborová
- 2020 Poster: A Dynamical Central Limit Theorem for Shallow Neural Networks
  Zhengdao Chen · Grant Rotskoff · Joan Bruna · Eric Vanden-Eijnden
- 2018 Poster: Parameters as interacting particles: long time convergence and asymptotic error scaling of neural networks
  Grant Rotskoff · Eric Vanden-Eijnden