Spotlight
Regularization Matters: Generalization and Optimization of Neural Nets v.s. their Induced Kernel
Colin Wei · Jason Lee · Qiang Liu · Tengyu Ma

Thu Dec 12th 04:50 -- 04:55 PM @ West Ballrooms A + B

Recent works have shown that on sufficiently over-parametrized neural nets, gradient descent with relatively large initialization optimizes a prediction function in the RKHS of the Neural Tangent Kernel (NTK). This analysis leads to global convergence results but does not work when there is a standard $\ell_2$ regularizer, which is useful to have in practice. We show that sample efficiency can indeed depend on the presence of the regularizer: we construct a simple distribution in $d$ dimensions which the optimal regularized neural net learns with $O(d)$ samples but the NTK requires $\Omega(d^2)$ samples to learn. To prove this, we establish two analysis tools: i) for multi-layer feedforward ReLU nets, we show that the global minimizer of a weakly-regularized cross-entropy loss is the max normalized margin solution among all neural nets, which generalizes well; ii) we develop a new technique for proving lower bounds for kernel methods, which relies on showing that the kernel cannot focus on informative features. Motivated by our generalization results, we study whether the regularized global optimum is attainable. We prove that for infinite-width two-layer nets, noisy gradient descent optimizes the regularized neural net loss to a global minimum in polynomial iterations.
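The Neural Tangent Kernel discussed above can be made concrete for a small two-layer ReLU net. The sketch below (widths, dimensions, and variable names are illustrative choices, not taken from the paper) evaluates the empirical NTK at initialization as the inner product of parameter gradients, for a net of the form f(x) = (1/√m) Σⱼ aⱼ · relu(wⱼ·x) with aⱼ ∈ {±1}.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 2048  # input dimension and hidden width (illustrative)

# Two-layer ReLU net: f(x) = (1/sqrt(m)) * sum_j a_j * relu(w_j . x)
W = rng.standard_normal((m, d))          # first-layer weights w_j
a = rng.choice([-1.0, 1.0], size=m)      # fixed-sign output weights a_j

def ntk(x, xp):
    """Empirical NTK at init: <grad_theta f(x), grad_theta f(x')>."""
    zx, zxp = W @ x, W @ xp
    # Contribution from grads w.r.t. W: (a_j^2/m) * 1[zx_j>0] * 1[zxp_j>0] * (x . x')
    # (a_j^2 = 1 since a_j is a sign.)
    from_W = ((zx > 0) & (zxp > 0)).sum() * (x @ xp)
    # Contribution from grads w.r.t. a: relu(zx_j) * relu(zxp_j) / m
    from_a = (np.maximum(zx, 0) * np.maximum(zxp, 0)).sum()
    return (from_W + from_a) / m
```

As expected for a kernel, the result is symmetric in its arguments and nonnegative on the diagonal; the NTK regime studied in the abstract corresponds to predictors in the RKHS of this kernel as m grows large.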

Author Information

Colin Wei (Stanford University)
Jason Lee (Princeton University)
Qiang Liu (UT Austin)
Tengyu Ma (Stanford University)
