An off-the-shelf model offered as a commercial service can be stolen by model stealing attacks, posing a great threat to the rights of the model owner. Model fingerprinting aims to verify whether a suspect model is stolen from the victim model, and has gained increasing attention. Previous methods typically leverage transferable adversarial examples as the model fingerprint, which are sensitive to adversarial defenses and transfer-learning scenarios. To address this issue, we instead consider the pairwise relationship between samples and propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC). Specifically, we present SAC-w, which selects wrongly classified normal samples as model inputs and calculates the mean correlation among their model outputs. To reduce the training time, we further develop SAC-m, which selects CutMix-augmented samples as model inputs, requiring neither training surrogate models nor generating adversarial examples. Extensive results validate that SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning, and detects stolen models with the best AUC across different datasets and model architectures. The code is available at https://github.com/guanjiyang/SAC.
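To make the core idea concrete, below is a minimal sketch of correlation-based fingerprint comparison, assuming PyTorch. The function names (`correlation_matrix`, `sac_distance`), the cosine-similarity correlation measure, and the mean-absolute-difference distance are illustrative assumptions for exposition, not the repository's exact implementation; see the linked code for the authors' method.

```python
import torch
import torch.nn.functional as F

def correlation_matrix(model: torch.nn.Module, samples: torch.Tensor) -> torch.Tensor:
    """Pairwise correlation (here: cosine similarity) between the model's
    outputs on a batch of probe samples. Returns an (n, n) matrix."""
    model.eval()
    with torch.no_grad():
        outputs = model(samples)           # (n, num_classes) logits
    outputs = F.normalize(outputs, dim=1)  # unit-norm rows
    return outputs @ outputs.T             # cosine-similarity matrix

def sac_distance(victim: torch.nn.Module,
                 suspect: torch.nn.Module,
                 samples: torch.Tensor) -> float:
    """Mean absolute difference between the two models' sample-correlation
    matrices. A small distance suggests the suspect model preserves the
    victim's pairwise sample relationships, i.e., it may be stolen."""
    c_victim = correlation_matrix(victim, samples)
    c_suspect = correlation_matrix(suspect, samples)
    return (c_victim - c_suspect).abs().mean().item()
```

In use, `sac_distance` would be compared against a threshold calibrated on independently trained models: the probe `samples` would be wrongly classified normal inputs (SAC-w) or CutMix-augmented inputs (SAC-m), per the abstract.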
Author Information
Jiyang Guan (Institute of Automation, Chinese Academy of Sciences)
Jian Liang (CASIA)
Ran He (NLPR, CASIA)
More from the Same Authors
- 2022 Spotlight: Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks
  Jiyang Guan · Jian Liang · Ran He
- 2022 Spotlight: Lightning Talks 3A-1
  Shu Ding · Wanxing Chang · Jiyang Guan · Mouxiang Chen · Guan Gui · Yue Tan · Shiyun Lin · Guodong Long · Yuze Han · Wei Wang · Zhen Zhao · Ye Shi · Jian Liang · Chenghao Liu · Lei Qi · Ran He · Jie Ma · Zemin Liu · Xiang Li · Hoang Tuan · Luping Zhou · Zhihua Zhang · Jianling Sun · Jingya Wang · LU LIU · Tianyi Zhou · Lei Wang · Jing Jiang · Yinghuan Shi
- 2022 Poster: Orthogonal Transformer: An Efficient Vision Transformer Backbone with Token Orthogonalization
  Huaibo Huang · Xiaoqiang Zhou · Ran He
- 2021 Poster: No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data
  Mi Luo · Fei Chen · Dapeng Hu · Yifan Zhang · Jian Liang · Jiashi Feng
- 2021 Poster: Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning
  Yifan Zhang · Bryan Hooi · Dapeng Hu · Jian Liang · Jiashi Feng
- 2020 Poster: AOT: Appearance Optimal Transport Based Identity Swapping for Forgery Detection
  Hao Zhu · Chaoyou Fu · Qianyi Wu · Wayne Wu · Chen Qian · Ran He
- 2019 Poster: Dual Variational Generation for Low Shot Heterogeneous Face Recognition
  Chaoyou Fu · Xiang Wu · Yibo Hu · Huaibo Huang · Ran He
- 2019 Spotlight: Dual Variational Generation for Low Shot Heterogeneous Face Recognition
  Chaoyou Fu · Xiang Wu · Yibo Hu · Huaibo Huang · Ran He
- 2018 Poster: Learning a High Fidelity Pose Invariant Model for High-resolution Face Frontalization
  Jie Cao · Yibo Hu · Hongwen Zhang · Ran He · Zhenan Sun
- 2018 Poster: IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis
  Huaibo Huang · Zhihang Li · Ran He · Zhenan Sun · Tieniu Tan
- 2017 Poster: Deep Supervised Discrete Hashing
  Qi Li · Zhenan Sun · Ran He · Tieniu Tan