Poster
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
Hadi Salman · Greg Yang · Huan Zhang · Cho-Jui Hsieh · Pengchuan Zhang
East Exhibition Hall B, C #152
Keywords: [ Applications ] [ Privacy, Anonymity, and Security ] [ Supervised Deep Networks ] [ Adversarial Learning ] [ Optimization for Deep Networks ] [ Deep Learning ]
Verification of neural networks enables us to gauge their robustness against adversarial attacks. Verification algorithms fall into two categories: exact verifiers, which run in exponential time, and relaxed verifiers, which are efficient but incomplete. In this paper, we unify, to the best of our knowledge, all existing LP-relaxed verifiers under a general convex relaxation framework. The framework applies to neural networks with diverse architectures and nonlinearities, and it covers both the primal and dual views of neural network verification. Next, we perform large-scale experiments, amounting to more than 22 CPU-years, to obtain the exact solution of the convex-relaxed problem that is optimal within our framework for ReLU networks. We find that this exact solution does not significantly narrow the gap between PGD attacks and existing relaxed verifiers for various networks trained normally or robustly on the MNIST and CIFAR datasets. Our results suggest that there is an inherent barrier to tight verification for the large class of methods captured by our framework. We discuss possible causes of this barrier and potential future directions for bypassing it.
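To make the object of study concrete: the LP relaxations the paper unifies replace each unstable ReLU y = max(z, 0), whose pre-activation z lies in a known interval [l, u] with l < 0 < u, by its convex hull, the "triangle" constraints y >= 0, y >= z, and y <= u(z - l)/(u - l). The Python sketch below is an illustration only, not the authors' implementation: it encodes this relaxation for one hidden layer with cvxpy and solves the resulting LP to lower-bound a classification margin. The toy weights, zero biases, interval-arithmetic pre-activation bounds, and epsilon value are all illustrative assumptions.

import numpy as np
import cvxpy as cp

# Toy network: 3 inputs -> 4 ReLU units -> 2 logits. Weights are random,
# and the hidden bias is zero so that every ReLU stays unstable
# (l < 0 < u) under the interval bounds below; all of this is illustrative.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))
W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)
x0, eps = np.zeros(3), 0.1  # l_inf ball of radius eps around x0

# Pre-activation bounds from naive interval arithmetic (the loosest of
# several possible choices for computing l and u).
l = W1 @ x0 - eps * np.abs(W1).sum(axis=1)
u = W1 @ x0 + eps * np.abs(W1).sum(axis=1)

x = cp.Variable(3)  # perturbed input
z = cp.Variable(4)  # pre-activations of the hidden layer
y = cp.Variable(4)  # relaxed ReLU outputs
constraints = [
    cp.abs(x - x0) <= eps,          # input region
    z == W1 @ x,                    # affine layer (zero bias)
    y >= 0,                         # triangle relaxation of y = max(z, 0):
    y >= z,                         #   the convex hull of the ReLU over [l, u]
    y <= cp.multiply(u / (u - l), z - l),
]

# Minimize the margin logit_0 - logit_1; a positive optimum certifies that
# no input in the epsilon ball flips the prediction from class 0 to class 1.
margin = (W2[0] - W2[1]) @ y + (b2[0] - b2[1])
problem = cp.Problem(cp.Minimize(margin), constraints)
print("certified margin lower bound:", problem.solve())

In the paper's terminology, solving such a program exactly (rather than bounding it, as efficient relaxed verifiers do) corresponds to the optimal verifier within the framework; this is what the 22-CPU-year experiments compute at scale.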