Poster
Cross-Domain Transferability of Adversarial Perturbations
Muhammad Muzammal Naseer · Salman H Khan · Muhammad Haris Khan · Fahad Shahbaz Khan · Fatih Porikli

Thu Dec 12 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #14
Adversarial examples reveal the blind spots of deep neural networks (DNNs) and represent a major concern for security-critical applications. The transferability of adversarial examples makes real-world attacks possible in black-box settings, where the attacker cannot access the internal parameters of the model. The underlying assumption in most adversary generation methods, whether learning an instance-specific or an instance-agnostic perturbation, is a direct or indirect reliance on the original domain-specific data distribution. In this work, for the first time, we demonstrate the existence of domain-invariant adversaries, thereby showing a common adversarial space among different datasets and models. To this end, we propose a framework capable of launching highly transferable attacks, crafting adversarial patterns that mislead networks trained on wholly different domains. For instance, an adversarial function learned on Paintings, Cartoons or Medical images can successfully perturb ImageNet samples to fool the classifier, with success rates as high as $\sim$99\% ($\ell_{\infty} \le 10$). The core of our proposed adversarial function is a generative network trained with a relativistic supervisory signal that enables domain-invariant perturbations. Our approach sets a new state-of-the-art for fooling rates, both under the white-box and black-box scenarios. Furthermore, despite being an instance-agnostic perturbation function, our attack outperforms conventionally much stronger instance-specific attack methods.
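As a rough illustration of the two ingredients the abstract names — the $\ell_{\infty} \le 10$ perturbation budget and a relativistic-style supervisory signal comparing adversarial and clean model outputs — the sketch below shows what these look like in NumPy. The function names (`project_linf`, `relativistic_loss`) and the exact loss form are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def project_linf(x_adv, x, eps=10.0, lo=0.0, hi=255.0):
    """Project an adversarial image onto the l_inf ball of radius eps
    around the clean image x, then clip to the valid pixel range.
    Hypothetical helper illustrating the l_inf <= 10 budget."""
    x_adv = np.clip(x_adv, x - eps, x + eps)
    return np.clip(x_adv, lo, hi)

def relativistic_loss(logits_adv, logits_clean, label):
    """Toy relativistic-style objective: cross-entropy of the true class
    computed on the *difference* between adversarial and clean logits,
    so the supervisory signal is relative to the clean prediction.
    (Illustrative assumption; not the paper's exact loss.)"""
    diff = logits_adv - logits_clean
    z = diff - diff.max()                      # numerically stable softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]                   # attacker would maximize this
```

In a full pipeline, a generator network would produce `x_adv` from `x`, `project_linf` would enforce the budget, and the relativistic loss on a pretrained classifier's logits would drive the generator's training.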

Author Information

Muhammad Muzammal Naseer (Australian National University (ANU))
Salman H Khan (Inception Institute of Artificial Intelligence)
Muhammad Haris Khan (Inception Institute of Artificial Intelligence)
Fahad Shahbaz Khan (Inception Institute of Artificial Intelligence)
Fatih Porikli (ANU)

Prof. Fatih Porikli is an IEEE Fellow and a Professor in the Research School of Engineering, Australian National University (ANU), Canberra. He is currently the Vice President of the San Diego Device Hardware Competency Center, Futurewei, San Diego. He was previously the Chief Scientist of Autonomous Vehicles at Futurewei, Santa Clara. Until 2017, he was the Computer Vision Research Group Leader at Data61/CSIRO, Australia (before the merger, at NICTA). He received his PhD from New York University (NYU) in 2002. Earlier, he served as a Distinguished Research Scientist at Mitsubishi Electric Research Laboratories (MERL), Cambridge. Before joining MERL in 2000, he developed satellite imaging solutions at HRL, Malibu, CA, and 3D display systems at AT&T Research Labs, Middletown, NJ.