

Poster

On Optimal Learning Under Targeted Data Poisoning

Steve Hanneke · Amin Karbasi · Mohammad Mahmoody · Idan Mehalel · Shay Moran

Hall J (level 1) #822

Abstract: Consider the task of learning a hypothesis class $H$ in the presence of an adversary that can replace up to an $\eta$ fraction of the examples in the training set with arbitrary adversarial examples. The adversary aims to fail the learner on a particular target test point $x$ which is \emph{known} to the adversary but not to the learner. In this work we aim to characterize the smallest achievable error $\epsilon = \epsilon(\eta)$ by the learner in the presence of such an adversary in both the realizable and agnostic settings. We fully achieve this in the realizable setting, proving that $\epsilon = \Theta(\mathrm{VC}(H) \cdot \eta)$, where $\mathrm{VC}(H)$ is the VC dimension of $H$. Remarkably, we show that the upper bound can be attained by a deterministic learner. In the agnostic setting we reveal a more elaborate landscape: we devise a deterministic learner with a multiplicative regret guarantee of $\epsilon \leq C \cdot \mathrm{OPT} + O(\mathrm{VC}(H) \cdot \eta)$, where $C > 1$ is a universal numerical constant. We complement this by showing that for any deterministic learner there is an attack which worsens its error to at least $2 \cdot \mathrm{OPT}$. This implies that a multiplicative deterioration in the regret is unavoidable in this case. Finally, the algorithms we develop for achieving the optimal rates are inherently improper. Nevertheless, we show that for a variety of natural concept classes, such as linear classifiers, it is possible to retain the dependence $\epsilon = \Theta_{H}(\eta)$ by a proper algorithm in the realizable setting. Here $\Theta_{H}$ conceals a polynomial dependence on $\mathrm{VC}(H)$.
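For quick reference, the quantitative results stated in the abstract can be collected in display form (this is only a restatement of the bounds above, with $C > 1$ the same universal constant):

$$\epsilon(\eta) = \Theta\bigl(\mathrm{VC}(H) \cdot \eta\bigr) \quad \text{(realizable setting, attainable deterministically)}$$

$$\epsilon \leq C \cdot \mathrm{OPT} + O\bigl(\mathrm{VC}(H) \cdot \eta\bigr) \quad \text{(agnostic setting, deterministic upper bound)}$$

$$\epsilon \geq 2 \cdot \mathrm{OPT} \quad \text{(agnostic setting, attack against any deterministic learner)}$$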
