

Spotlight Poster

Reliable Learning of Halfspaces under Gaussian Marginals

Ilias Diakonikolas · Lisheng Ren · Nikos Zarifis

East Exhibit Hall A-C #4205
Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract: We study the problem of PAC learning halfspaces in the reliable agnostic model of Kalai et al. (2012). The reliable PAC model captures learning scenarios where one type of error is costlier than other types (e.g., false positives may be far costlier than false negatives). Our main positive result is a new algorithm for reliable learning of Gaussian halfspaces on $\mathbb{R}^d$ with sample and computational complexity $d^{O(\log (\min\{1/\alpha, 1/\epsilon\}))}\min (2^{\log(1/\epsilon)^{O(\log (1/\alpha))}},2^{\mathrm{poly}(1/\epsilon)})$, where $\epsilon$ is the excess error and $\alpha$ is the bias of the optimal halfspace. We complement our upper bound with a Statistical Query lower bound suggesting that the $d^{\Omega(\log (1/\alpha))}$ dependence on the dimension is best possible. Conceptually, our results imply a strong computational separation between reliable agnostic learning and standard agnostic learning of halfspaces in the Gaussian setting.
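For readability, the complexity bound above can be restated in display form (a restatement only, with $\epsilon$ the excess error and $\alpha$ the bias of the optimal halfspace, exactly as defined in the abstract; the implicit product is written with an explicit $\cdot$):
\[
d^{\,O\left(\log\left(\min\{1/\alpha,\, 1/\epsilon\}\right)\right)} \cdot \min\left( 2^{\log(1/\epsilon)^{O(\log(1/\alpha))}},\; 2^{\mathrm{poly}(1/\epsilon)} \right)
\]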
