

Poster in Workshop: Bayesian Deep Learning

Power-law asymptotics of the generalization error for GP regression under power-law priors and targets

Hui Jin · Pradeep Kr. Banerjee · Guido Montufar


Abstract: We study the power-law asymptotics of learning curves for Gaussian process regression (GPR). When the eigenspectrum of the prior decays with rate $\alpha$ and the eigenexpansion coefficients of the target function decay with rate $\beta$, we show that the Bayesian generalization error behaves as $\tilde O(n^{\max\{\frac{1}{\alpha}-1, \frac{1-2\beta}{\alpha}\}})$ with high probability over the draw of $n$ input samples. Infinitely wide neural networks can be related to GPR with the Neural Network Gaussian Process (NNGP) kernel, which in several cases is known to have a power-law spectrum. Hence our methods can also be applied to study the generalization error of infinitely wide neural networks. We present toy experiments demonstrating the theory.
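Below is a minimal numerical sketch (not the authors' code) of the kind of toy setting the abstract describes: a GPR problem whose kernel is built from a truncated eigenexpansion with eigenvalues decaying as $k^{-\alpha}$ and a target whose expansion coefficients decay as $k^{-\beta}$. The cosine eigenfunctions, the parameter values, and the plug-in error estimate are illustrative assumptions; the empirical log-log slope of the learning curve can then be compared with the abstract's predicted exponent $\max\{\frac{1}{\alpha}-1, \frac{1-2\beta}{\alpha}\}$.

```python
# Minimal illustrative sketch (assumptions throughout, not the paper's exact setup):
# GP regression with a truncated Mercer kernel whose eigenvalues decay as k^{-alpha}
# and a target whose eigenexpansion coefficients decay as k^{-beta}.
import numpy as np

rng = np.random.default_rng(0)

alpha, beta = 2.0, 1.5      # assumed decay rates of the prior spectrum and target coefficients
K_FEATURES = 500            # truncation level of the eigenexpansion (illustrative)
noise_var = 1e-4            # observation noise variance in the GP likelihood (illustrative)

ks = np.arange(1, K_FEATURES + 1)
eigvals = ks ** (-alpha)    # prior eigenspectrum  lambda_k ~ k^{-alpha}
coeffs = ks ** (-beta)      # target coefficients  c_k ~ k^{-beta}

def features(x):
    """Cosine eigenfunctions on [0, 1] (an illustrative choice of eigenbasis)."""
    return np.sqrt(2.0) * np.cos(np.pi * np.outer(x, ks))

def kernel(x1, x2):
    """Truncated Mercer kernel K(x, x') = sum_k lambda_k phi_k(x) phi_k(x')."""
    return features(x1) @ (eigvals[None, :] * features(x2)).T

def target(x):
    """Target function f(x) = sum_k c_k phi_k(x)."""
    return features(x) @ coeffs

x_test = np.linspace(0.0, 1.0, 2000)
f_test = target(x_test)

ns = [50, 100, 200, 400, 800]
errors = []
for n in ns:
    x_train = rng.uniform(0.0, 1.0, size=n)
    y_train = target(x_train) + np.sqrt(noise_var) * rng.standard_normal(n)
    K_nn = kernel(x_train, x_train) + noise_var * np.eye(n)
    K_tn = kernel(x_test, x_train)
    post_mean = K_tn @ np.linalg.solve(K_nn, y_train)
    # Plug-in estimate of the generalization error of the posterior mean
    # (a proxy for the Bayesian generalization error studied in the paper).
    errors.append(np.mean((post_mean - f_test) ** 2))

# Empirical learning-curve slope on a log-log scale, compared with the
# predicted exponent max{1/alpha - 1, (1 - 2*beta)/alpha} from the abstract.
slope = np.polyfit(np.log(ns), np.log(errors), 1)[0]
print("empirical slope:", slope)
print("predicted exponent:", max(1 / alpha - 1, (1 - 2 * beta) / alpha))
```

With the assumed values alpha = 2 and beta = 1.5, the predicted exponent is max{-0.5, -1} = -0.5; the finite-size, finite-truncation experiment should recover a slope in that vicinity, up to constants and the noise floor.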
