Poster
Lower Bounds on Adversarial Robustness from Optimal Transport
Arjun Nitin Bhagoji · Daniel Cullina · Prateek Mittal
East Exhibition Hall B, C #94
Keywords: Learning Theory; Theory; Applications; Privacy, Anonymity, and Security
Abstract:
While progress has been made in understanding the robustness of machine learning classifiers to test-time adversaries (evasion attacks), fundamental questions remain unresolved. In this paper, we use optimal transport to characterize the maximum achievable accuracy in an adversarial classification scenario. In this setting, an adversary receives a random labeled example from one of two classes, perturbs the example subject to a neighborhood constraint, and presents the modified example to the classifier. We define an appropriate cost function such that the minimum transportation cost between the distributions of the two classes determines the \emph{minimum $0$-$1$ loss for any classifier}. When the classifier comes from a restricted hypothesis class, the optimal transportation cost provides a lower bound on the loss. We apply our framework to the case of Gaussian data with norm-bounded adversaries, explicitly showing matching bounds for the classification and transport problems and the optimality of linear classifiers. We also characterize the sample complexity of learning in this setting, deriving and extending previously known results as a special case. Finally, we use our framework to study the gap between the optimal classification performance achievable and that currently attained by state-of-the-art robustly trained neural networks for datasets of interest, namely MNIST, Fashion-MNIST, and CIFAR-10.
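To make the Gaussian special case concrete, the following Python sketch illustrates the kind of quantity the framework bounds. It is an assumption-laden illustration, not the paper's derivation: it considers two isotropic classes N(+mu, sigma^2 I) and N(-mu, sigma^2 I) with an l2-bounded adversary of budget eps, and pictures the adversary's effect as shrinking the separation between the class means by eps, so the best (linear) classifier's error is a Gaussian tail of the reduced separation. The names mu, sigma, eps and the closed form below are illustrative choices for this symmetric, isotropic setting only.

# Illustrative sketch (assumed isotropic, symmetric two-Gaussian setting; not the paper's exact formula).
import numpy as np
from scipy.stats import norm

def gaussian_bayes_error(mu: np.ndarray, sigma: float) -> float:
    """0-1 loss of the optimal (linear) classifier with no adversary."""
    return float(norm.cdf(-np.linalg.norm(mu) / sigma))

def adversarial_error_lower_bound(mu: np.ndarray, sigma: float, eps: float) -> float:
    """Hypothesized lower bound on 0-1 loss under an l2 adversary of budget eps:
    in the isotropic case, moving each example eps toward the other class acts
    like reducing the mean separation ||mu|| by eps."""
    effective_sep = max(np.linalg.norm(mu) - eps, 0.0)
    return float(norm.cdf(-effective_sep / sigma))

if __name__ == "__main__":
    mu, sigma = np.array([1.5, 0.0]), 1.0
    print("no adversary:", gaussian_bayes_error(mu, sigma))
    for eps in (0.5, 1.0, 1.5):
        print(f"eps={eps}:", adversarial_error_lower_bound(mu, sigma, eps))

Under these assumptions, the bound reduces to the standard Bayes error when eps = 0 and rises toward 1/2 as the adversary's budget approaches the separation between the means, matching the intuition that a sufficiently powerful adversary can make the two perturbed class distributions overlap completely.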