Generalization error in high-dimensional perceptrons: Approaching Bayes error with convex optimization
Benjamin Aubin · Florent Krzakala · Yue Lu · Lenka Zdeborová

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1402
We consider the commonly studied supervised classification of a synthetic dataset whose labels are generated by feeding a one-layer non-linear neural network with random iid inputs. We study the generalization performance of standard classifiers in the high-dimensional regime where $\alpha=\frac{n}{d}$ is kept finite in the limit of large dimension $d$ and number of samples $n$. Our contribution is three-fold: First, we prove a formula for the generalization error achieved by $\ell_2$ regularized classifiers that minimize a convex loss. This formula was first obtained by the heuristic replica method of statistical physics. Second, focusing on commonly used loss functions and optimizing the $\ell_2$ regularization strength, we observe that while ridge regression performance is poor, logistic and hinge regression are surprisingly able to approach the Bayes-optimal generalization error extremely closely. As $\alpha \to \infty$ they lead to Bayes-optimal rates, a fact that does not follow from predictions of margin-based generalization error bounds. Third, we design an optimal loss and regularizer that provably leads to Bayes-optimal generalization error.
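As a minimal sketch of the experimental setting described in the abstract, the snippet below generates synthetic labels from a single-layer teacher network on iid Gaussian inputs and compares $\ell_2$-regularized ridge and logistic classifiers. The sign activation and the regularization strengths are illustrative assumptions, not the paper's exact protocol; scikit-learn is used only for convenience.

# Sketch of the synthetic teacher-student setup; the sign activation
# and regularization strengths are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression, RidgeClassifier

rng = np.random.default_rng(0)
d = 200            # input dimension
alpha = 3.0        # sample ratio alpha = n / d
n = int(alpha * d)

# Teacher: labels from a one-layer non-linear network with iid Gaussian inputs.
w_star = rng.standard_normal(d)
X_train = rng.standard_normal((n, d))
y_train = np.sign(X_train @ w_star)
X_test = rng.standard_normal((10 * n, d))
y_test = np.sign(X_test @ w_star)

# l2-regularized convex classifiers: square loss (ridge) vs. logistic loss.
ridge = RidgeClassifier(alpha=1.0).fit(X_train, y_train)
logit = LogisticRegression(C=1.0, max_iter=1000).fit(X_train, y_train)

print("ridge test error:   ", np.mean(ridge.predict(X_test) != y_test))
print("logistic test error:", np.mean(logit.predict(X_test) != y_test))

In this sketch the logistic classifier typically attains a noticeably lower test error than ridge at moderate $\alpha$, consistent with the abstract's observation that logistic and hinge losses approach the Bayes-optimal error while ridge does not.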

Author Information

Benjamin Aubin (Facebook AI)
Florent Krzakala (ENS Paris, Sorbonnes Université & EPFL)
Yue Lu (Harvard University)
Lenka Zdeborová (EPFL)