
Handcrafted Backdoors in Deep Neural Networks
Sanghyun Hong · Nicholas Carlini · Alexey Kurakin

Thu Dec 01 09:30 AM -- 11:00 AM (PST) @ Hall J #512
When machine learning training is outsourced to third parties, backdoor attacks become practical, as the third party who trains the model may act maliciously to inject hidden behaviors into the otherwise accurate model. Until now, the mechanism to inject backdoors has been limited to poisoning. We argue that a supply-chain attacker has more attack techniques available, and introduce a handcrafted attack that directly manipulates a model's weights. This direct modification gives our attacker more degrees of freedom than poisoning, and we show it can be used to effectively evade many backdoor detection or removal defenses. Across four datasets and four network architectures, our backdoor attacks maintain an attack success rate above 96%. Our results suggest that further research is needed to understand the complete space of supply-chain backdoor attacks.
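To illustrate the core idea of handcrafting a backdoor by editing weights directly (rather than poisoning training data), here is a minimal toy sketch. It is not the paper's actual method: the network, trigger, and the specific weight edits below are hypothetical choices for illustration. A hidden unit of a small two-layer network is repurposed as a trigger detector, and its activation is routed to an attacker-chosen target class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network: 16-dim input -> 8 hidden units (ReLU) -> 3 classes.
W1 = rng.normal(0, 0.1, (8, 16))
b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (3, 8))
b2 = np.zeros(3)

def forward(x):
    h = np.maximum(0, W1 @ x + b1)
    return W2 @ h + b2

# Attacker picks a fixed trigger pattern and a target class (both arbitrary here).
trigger = np.zeros(16)
trigger[:4] = 1.0          # trigger: first four input features set to 1
target_class = 2

# Handcrafted backdoor: overwrite one hidden unit's weights so it fires
# only when the trigger is present -- no poisoned training data needed.
scale = 10.0
W1[0] = scale * trigger                     # unit 0 responds to the trigger
b1[0] = -0.5 * scale * (trigger @ trigger)  # threshold at half the trigger response
W2[:, 0] = 0.0
W2[target_class, 0] = 5.0                   # route unit 0 to the target class

# An input carrying the trigger is now classified as the target class.
x = rng.normal(0, 0.1, 16)
print(np.argmax(forward(x + trigger)))
```

On clean inputs the detector unit stays below its threshold and is zeroed by the ReLU, so the model's behavior is largely unchanged; only triggered inputs activate the hidden path to the target class.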

Author Information

Sanghyun Hong (Oregon State University)
Nicholas Carlini (Google)
Alexey Kurakin (Google Brain)