Recent empirical work has shown that hierarchical convolutional kernels inspired by convolutional neural networks (CNNs) significantly improve the performance of kernel methods in image classification tasks. A widely accepted explanation for their success is that these architectures encode hypothesis classes that are well suited to natural images. However, understanding the precise interplay between approximation and generalization in convolutional architectures remains a challenge. In this paper, we consider the stylized setting of covariates (image pixels) uniformly distributed on the hypercube, and characterize exactly the RKHS of kernels composed of single layers of convolution, pooling, and downsampling operations. We use this characterization to compute sharp asymptotics of the generalization error for any given function in high dimensions. In particular, we quantify the gain in sample complexity brought by enforcing locality with the convolution operation and approximate translation invariance with average pooling. Notably, these results provide a precise description of how convolution and pooling operations trade off approximation against generalization power in one-layer convolutional kernels.
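To make the objects in the abstract concrete, here is a minimal sketch (not the paper's exact definitions) of a one-layer convolutional kernel on inputs from the hypercube: locality is enforced by comparing size-q patches, and approximate translation invariance by average pooling over patch positions. The cyclic patch extraction and the choice of activation kernel κ(t) = exp(t) are assumptions made for illustration only.

```python
import numpy as np

def patches(x, q):
    """Extract all cyclic patches of size q from a 1-D signal x (one per position)."""
    d = len(x)
    return np.stack([np.roll(x, -k)[:q] for k in range(d)])

def conv_kernel(x, y, q, kappa=np.exp):
    """Convolutional kernel with locality only: average kappa over aligned patches."""
    Px, Py = patches(x, q), patches(y, q)
    # inner product of each patch of x with the patch of y at the same position
    return np.mean(kappa((Px * Py).sum(axis=1) / q))

def conv_pool_kernel(x, y, q, kappa=np.exp):
    """Convolutional kernel with average pooling: average kappa over ALL pairs of
    patch positions, which makes the kernel invariant to cyclic shifts of either input."""
    Px, Py = patches(x, q), patches(y, q)
    G = Px @ Py.T / q  # all pairwise patch inner products
    return np.mean(kappa(G))
```

For instance, `conv_pool_kernel(np.roll(x, s), y, q)` equals `conv_pool_kernel(x, y, q)` for any shift `s`, since pooling averages over all pairs of patch positions; `conv_kernel` is only approximately shift-stable, which is the locality-versus-invariance trade-off the abstract refers to.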
Author Information
Theodor Misiakiewicz (Stanford University)
Song Mei (University of California, Berkeley)
More from the Same Authors
- 2021 Spotlight: Understanding the Under-Coverage Bias in Uncertainty Estimation
  Yu Bai · Song Mei · Huan Wang · Caiming Xiong
- 2022 Poster: Efficient Phi-Regret Minimization in Extensive-Form Games via Online Mirror Descent
  Yu Bai · Chi Jin · Song Mei · Ziang Song · Tiancheng Yu
- 2022 Poster: Precise Learning Curves and Higher-Order Scalings for Dot-product Kernel Regression
  Lechao Xiao · Jeffrey Pennington · Theodor Misiakiewicz · Hong Hu · Yue Lu
- 2022 Poster: Sample-Efficient Learning of Correlated Equilibria in Extensive-Form Games
  Ziang Song · Song Mei · Yu Bai
- 2021 Poster: Understanding the Under-Coverage Bias in Uncertainty Estimation
  Yu Bai · Song Mei · Huan Wang · Caiming Xiong
- 2020 Poster: When Do Neural Networks Outperform Kernel Methods?
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2019 Poster: Limitations of Lazy Training of Two-layers Neural Network
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari
- 2019 Spotlight: Limitations of Lazy Training of Two-layers Neural Network
  Behrooz Ghorbani · Song Mei · Theodor Misiakiewicz · Andrea Montanari