

Poster in Workshop: Backdoors in Deep Learning: The Good, the Bad, and the Ugly

Clean-label Backdoor Attacks by Selectively Poisoning with Limited Information from Target Class

Nguyen Hung-Quang · Ngoc-Hieu Nguyen · The Anh Ta · Thanh Nguyen-Tang · Hoang Thanh-Tung · Khoa D Doan

[ Project Page ]
Fri 15 Dec 1 p.m. PST — 1:45 p.m. PST

Abstract:

Deep neural networks have been shown to be vulnerable to backdoor attacks, in which the adversary manipulates the training dataset so that the model misbehaves whenever a trigger appears while still behaving normally on benign data. Clean-label attacks succeed without modifying the semantic labels of poisoned data, which makes them stealthier but also more challenging. To control the victim model, existing works add triggers to a random subset of the dataset, neglecting the fact that samples contribute unequally to the success of the attack and therefore not exploiting the full potential of the backdoor. Some recent studies propose strategies to select samples by recording forgetting events or by looking for hard samples with a supervised model. However, these methods require training and assume that the attacker has access to the entire labeled training set, which is not always the case in practice. In this work, we consider a more practical setting in which the attacker only provides a subset of the dataset with the target label and has no knowledge of the victim model, and we propose a method to select samples to poison more effectively. Our method takes advantage of pretrained self-supervised models, thus incurring no extra training cost, and can be applied to any victim model. Experiments on benchmark datasets illustrate the effectiveness of our strategy in improving clean-label backdoor attacks. Our strategy helps SIG reach a 91% success rate with only a 10% poisoning ratio.
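To make the selection idea concrete, below is a minimal sketch, not the authors' implementation. It assumes the selection score is the distance to the target-class centroid in the feature space of a pretrained self-supervised encoder; here a torchvision ResNet-18 backbone stands in for a real SSL checkpoint, and the SIG trigger is the standard sinusoidal overlay. The specific scoring rule, encoder, and trigger parameters are illustrative assumptions.

```python
# Sketch of selective poisoning from a target-class subset only.
# Assumptions (not from the paper): ResNet-18 backbone as the frozen
# self-supervised encoder, and "hardness" = distance to the class centroid.
import numpy as np
import torch
import torchvision.models as models


def embed(images: torch.Tensor, encoder: torch.nn.Module) -> torch.Tensor:
    """Embed a batch of images (N, 3, H, W) with a frozen encoder."""
    encoder.eval()
    with torch.no_grad():
        return encoder(images)


def select_hard_samples(features: torch.Tensor, poison_budget: int) -> np.ndarray:
    """Rank target-class samples by distance to the class centroid and
    return the indices of the `poison_budget` farthest ("hardest") ones."""
    centroid = features.mean(dim=0, keepdim=True)
    dists = torch.norm(features - centroid, dim=1)
    return torch.topk(dists, k=poison_budget).indices.cpu().numpy()


def sig_trigger(image: torch.Tensor, delta: float = 0.08, freq: int = 6) -> torch.Tensor:
    """Overlay a sinusoidal (SIG-style) signal on one image in [0, 1];
    delta and freq are illustrative values, not the paper's settings."""
    _, h, w = image.shape
    cols = torch.arange(w, dtype=torch.float32)
    signal = delta * torch.sin(2 * np.pi * cols * freq / w)  # shape (w,)
    return torch.clamp(image + signal.view(1, 1, w), 0.0, 1.0)


# Usage: poison 10% of the target-class subset the attacker controls.
target_class_images = torch.rand(500, 3, 32, 32)  # placeholder data
encoder = torch.nn.Sequential(
    *list(models.resnet18(weights=None).children())[:-1],  # drop the fc head
    torch.nn.Flatten(),
)
feats = embed(target_class_images, encoder)
idx = select_hard_samples(feats, poison_budget=len(target_class_images) // 10)
for i in idx:
    target_class_images[i] = sig_trigger(target_class_images[i])
```

Because the encoder is pretrained and frozen, the selection step adds no training cost and does not depend on the victim architecture, matching the setting described in the abstract.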
