Poster
Nearly Tight Bounds for Robust Proper Learning of Halfspaces with a Margin
Ilias Diakonikolas · Daniel Kane · Pasin Manurangsi
East Exhibition Hall B, C #233
Keywords: [ Learning Theory ] [ Theory ] [ Hardness of Learning and Approximations ]
Abstract:
We study the problem of {\em properly} learning large margin halfspaces in the agnostic PAC model.
In more detail, we study the complexity of properly learning $d$-dimensional halfspaces
on the unit ball within misclassification error $\alpha \cdot \mathrm{opt}_{\gamma} + \epsilon$, where $\mathrm{opt}_{\gamma}$
is the optimal $\gamma$-margin error rate and $\alpha \geq 1$ is the approximation ratio.
We give learning algorithms and computational hardness results
for this problem, for all values of the approximation ratio $\alpha \geq 1$,
that nearly match for a range of parameters.
Specifically, for the natural setting where $\alpha$ is any constant greater
than one, we provide an essentially tight complexity characterization.
On the positive side, we give an $\alpha = 1.01$-approximate proper learner
that uses $O(1/(\epsilon^2\gamma^2))$ samples (which is optimal) and runs in time
$\mathrm{poly}(d/\epsilon) \cdot 2^{\tilde{O}(1/\gamma^2)}$. On the negative side,
we show that {\em any} constant-factor approximate proper learner has runtime
$\mathrm{poly}(d/\epsilon) \cdot 2^{(1/\gamma)^{2-o(1)}}$,
assuming the Exponential Time Hypothesis.
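For readers who want the guarantee pinned down, the following display is a sketch of the standard $\gamma$-margin notation the abstract uses; the exact normalization (unit-norm weight vectors, examples on the unit ball) is assumed here rather than quoted from the paper:
\[
  \mathrm{err}^{D}_{\gamma}(w) \;=\; \Pr_{(x,y)\sim D}\bigl[\, y\,\langle w, x\rangle < \gamma \,\bigr],
  \qquad
  \mathrm{opt}_{\gamma} \;=\; \min_{\|w\|_2 = 1}\, \mathrm{err}^{D}_{\gamma}(w),
\]
where $D$ is a distribution over the unit ball in $\mathbb{R}^d$ with labels $y \in \{-1, +1\}$. Under this notation, the learner's goal is to output a unit vector $\hat{w}$ whose (zero-margin) misclassification error $\Pr_{(x,y)\sim D}[\, y\,\langle \hat{w}, x\rangle \le 0 \,]$ is at most $\alpha \cdot \mathrm{opt}_{\gamma} + \epsilon$.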