Deep neural networks have been shown to be vulnerable to adversarial examples. Although many defense methods have been proposed to enhance robustness, building a trustworthy, attack-free machine learning system remains a distant goal. In this paper, instead of enhancing robustness, we take the investigator's perspective and propose a new framework that traces the first compromised model in the manner of a forensic investigation. Specifically, we focus on the following setting: a machine learning service provider supplies models to a set of customers, but one of the customers conducts adversarial attacks to fool the system. The investigator's objective is therefore to identify the first compromised model by collecting and analyzing evidence from the available adversarial examples alone. To make this tracing viable, we design a random mask watermarking mechanism that differentiates adversarial examples crafted against different models. We first propose a tracing approach for the data-limited case, where the original example is also available, and then design a data-free approach that identifies the adversary without access to the original example. Finally, we evaluate the effectiveness of the proposed framework through extensive experiments with different model architectures, adversarial attacks, and datasets.
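The abstract does not spell out the watermarking or tracing procedures, so the following is only a minimal toy sketch of the general idea under assumed details: each customer's model is tied to a distinct random binary input mask, and in the data-limited case the investigator scores each customer by how much of the adversarial perturbation overlaps that customer's mask. The names (`apply_watermark`, `trace_data_limited`) and the overlap-based score are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: each customer receives a model whose inputs pass through
# a distinct random binary mask (the per-customer "watermark").
NUM_CUSTOMERS = 5
IMAGE_SHAPE = (32, 32, 3)
masks = [rng.integers(0, 2, size=IMAGE_SHAPE).astype(np.float32)
         for _ in range(NUM_CUSTOMERS)]

def apply_watermark(x, mask):
    """Zero out the positions hidden by this customer's mask."""
    return x * mask

def trace_data_limited(x_orig, x_adv, masks):
    """Data-limited tracing sketch: with the original example available,
    score each customer by the fraction of the adversarial perturbation
    that falls inside that customer's visible input region."""
    delta = np.abs(x_adv - x_orig)
    scores = [float((delta * m).sum() / (delta.sum() + 1e-12)) for m in masks]
    return int(np.argmax(scores)), scores

# Toy usage: an attacker targeting customer 2 only perturbs pixels that
# customer 2's masked model can actually see, so the overlap score for
# customer 2 is close to 1 while the others stay near 0.5.
x_orig = rng.random(IMAGE_SHAPE).astype(np.float32)
perturbation = rng.normal(0, 0.03, IMAGE_SHAPE).astype(np.float32) * masks[2]
x_adv = np.clip(x_orig + perturbation, 0.0, 1.0)

suspect, scores = trace_data_limited(x_orig, x_adv, masks)
print("suspected customer:", suspect)
```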
Author Information
Minhao Cheng (Hong Kong University of Science and Technology)
Rui Min (Sensetime Group Limited)
More from the Same Authors
- 2022: Trusted Aggregation (TAG): Model Filtering Backdoor Defense In Federated Learning (Joseph Lavond · Minhao Cheng · Yao Li)
- 2022: FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning (Yuanhao Xiong · Ruochen Wang · Minhao Cheng · Felix Yu · Cho-Jui Hsieh)
- 2022: Defend Against Textual Backdoor Attacks By Token Substitution (Xinglin Li · Yao Li · Minhao Cheng)
- 2022 Poster: Efficient Non-Parametric Optimizer Search for Diverse Tasks (Ruochen Wang · Yuanhao Xiong · Minhao Cheng · Cho-Jui Hsieh)
- 2022 Poster: Random Sharpness-Aware Minimization (Yong Liu · Siqi Mai · Minhao Cheng · Xiangning Chen · Cho-Jui Hsieh · Yang You)