Adversarial Examples Make Strong Poisons
Liam Fowl · Micah Goldblum · Ping-yeh Chiang · Jonas Geiping · Wojciech Czaja · Tom Goldstein

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

The adversarial machine learning literature is largely partitioned into evasion attacks on testing data and poisoning attacks on training data. In this work, we show that adversarial examples, originally intended for attacking pre-trained models, are even more effective for data poisoning than recent methods designed specifically for poisoning. In fact, adversarial examples with labels re-assigned by the crafting network remain effective for training, suggesting that adversarial examples contain useful semantic content, just with the "wrong" labels (according to a network, but not a human). Our method, adversarial poisoning, is substantially more effective than existing poisoning methods for secure dataset release, and we release a poisoned version of ImageNet, ImageNet-P, to encourage research into the strength of this form of data obfuscation.
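The core mechanism the abstract describes, crafting untargeted adversarial perturbations against a trained "crafting" network and then releasing the perturbed data as poisons, can be illustrated with projected gradient descent (PGD). The sketch below is a hypothetical toy: it uses a plain softmax linear model in NumPy as a stand-in for the crafting network, and the function name, step sizes, and bounds are illustrative choices, not the paper's actual configuration.

```python
import numpy as np

def craft_poison(x, y, W, b, eps=0.1, step=0.02, iters=20):
    """Toy sketch of untargeted L-infinity PGD poisoning.

    Maximizes the cross-entropy loss of example (x, y) under a softmax
    linear model (logits = W @ x + b), projecting the perturbation back
    into the eps-ball after each step. Hyperparameters are illustrative.
    """
    x_adv = x.copy()
    for _ in range(iters):
        # forward pass: softmax probabilities (shifted for stability)
        logits = W @ x_adv + b
        p = np.exp(logits - logits.max())
        p /= p.sum()
        # gradient of cross-entropy w.r.t. the input: W^T (p - onehot(y))
        onehot = np.zeros_like(p)
        onehot[y] = 1.0
        grad = W.T @ (p - onehot)
        # signed gradient ascent, then project onto the eps-ball around x
        x_adv = x_adv + step * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

A dataset poisoned this way looks unchanged to a human (the perturbation is bounded by `eps`), but each example carries features the crafting network associates with a different class, which is what makes training on it ineffective.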

Author Information

Liam Fowl (University of Maryland)
Micah Goldblum (University of Maryland)
Ping-yeh Chiang (University of Maryland, College Park)
Jonas Geiping (University of Maryland, College Park)

Hello, I’m Jonas. I conduct research in computer science as a postdoc at the University of Maryland. My background is in mathematics, more specifically in mathematical optimization, and I am interested in research at the intersection of current deep learning and mathematical optimization, with my main application area being computer vision.

Wojciech Czaja
Tom Goldstein (Rice University)