Data poisoning---the process by which an attacker takes control of a model by making imperceptible changes to a subset of the training data---is an emerging threat in the context of neural networks. Existing attacks for data poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought of as intractable for deep models. We propose MetaPoison, a first-order method that approximates the bilevel problem via meta-learning and crafts poisons that fool neural networks. MetaPoison is effective: it outperforms previous clean-label poisoning methods by a large margin. MetaPoison is robust: poisoned data made for one model transfers to a variety of victim models with unknown training settings and architectures. MetaPoison is general-purpose: it works not only in fine-tuning scenarios, but also for end-to-end training from scratch, which until now has not been feasible for clean-label attacks on deep nets. MetaPoison can achieve arbitrary adversary goals, such as using poisons of one class to make a target image don the label of another arbitrarily chosen class. Finally, MetaPoison works in the real world: we demonstrate, for the first time, successful data poisoning of models trained on the black-box Google Cloud AutoML API.
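The core idea in the abstract---approximating the bilevel poisoning objective by unrolling a few steps of the victim's training and descending the adversarial loss with respect to the poison perturbations---can be illustrated on a toy problem. The sketch below is not the authors' code: it uses a tiny linear-regression "victim" and estimates the meta-gradient by finite differences rather than backpropagating through the unrolled steps as MetaPoison does; all variable names and hyperparameters are illustrative assumptions.

```python
# Hedged toy sketch of MetaPoison-style poison crafting (assumptions:
# linear-regression victim, finite-difference meta-gradient).
# Outer loop: perturb training features ("clean-label": labels untouched).
# Inner loop: unroll a few SGD steps of victim training, then measure how
# far the trained model's prediction on a target point is from the
# attacker's desired label.
import numpy as np

rng = np.random.default_rng(0)

# Clean training data (inner problem) and a target point (outer objective).
X_clean = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y_clean = X_clean @ w_true
x_target = rng.normal(size=3)
y_adv = 5.0  # label the attacker wants the trained model to assign

def unrolled_train(X, y, steps=5, lr=0.1):
    """Unroll a few gradient steps of victim training from a fixed init."""
    w = np.zeros(3)
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def adv_loss(delta):
    """Outer (adversarial) loss after inner training on poisoned data."""
    w = unrolled_train(X_clean + delta, y_clean)
    return (x_target @ w - y_adv) ** 2

# Craft poisons: sign-gradient descent on adv_loss, with the perturbation
# clipped to a small box to keep it "imperceptible" in spirit.
delta = np.zeros_like(X_clean)
eps, step = 1e-4, 0.01
for _ in range(100):
    grad = np.zeros_like(delta)
    base = adv_loss(delta)
    for idx in np.ndindex(delta.shape):
        d = delta.copy()
        d[idx] += eps
        grad[idx] = (adv_loss(d) - base) / eps  # finite-difference meta-gradient
    delta -= step * np.sign(grad)
    delta = np.clip(delta, -0.5, 0.5)

print("clean adv loss:", adv_loss(np.zeros_like(delta)))
print("poisoned adv loss:", adv_loss(delta))
```

The full method replaces the finite-difference estimate with first-order gradients computed through the unrolled training graph, and crafts poisons that transfer across architectures by re-initializing the inner training repeatedly during crafting.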
Author Information
W. Ronny Huang (Google Research)
Jonas Geiping (University of Siegen)
Hello, I’m Jonas. I conduct research in computer science as a postdoc at the University of Maryland. My background is in mathematics, more specifically in mathematical optimization, and I am interested in research at the intersection of current deep learning and mathematical optimization, with computer vision as my main area of application.
Liam Fowl (University of Maryland)
Gavin Taylor (US Naval Academy)
Tom Goldstein (University of Maryland)
More from the Same Authors
- 2020 : An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process »
  David Tran · Alex Valtchanov · Keshav R Ganapathy · Raymond Feng · Eric Slud · Micah Goldblum · Tom Goldstein
- 2021 : Execute Order 66: Targeted Data Poisoning for Reinforcement Learning via Minuscule Perturbations »
  Harrison Foley · Liam Fowl · Tom Goldstein · Gavin Taylor
- 2021 Poster: GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training »
  Chen Zhu · Renkun Ni · Zheng Xu · Kezhi Kong · W. Ronny Huang · Tom Goldstein
- 2021 Poster: Adversarial Examples Make Strong Poisons »
  Liam Fowl · Micah Goldblum · Ping-yeh Chiang · Jonas Geiping · Wojciech Czaja · Tom Goldstein
- 2020 : W Ronny Huang---Understanding Generalization through Visualizations »
  W. Ronny Huang
- 2020 : The Intrinsic Dimension of Images and Its Impact on Learning »
  Chen Zhu · Micah Goldblum · Ahmed Abdelkader · Tom Goldstein · Phillip Pope
- 2020 Workshop: Workshop on Dataset Curation and Security »
  Nathalie Baracaldo Angel · Yonatan Bisk · Avrim Blum · Michael Curry · John Dickerson · Micah Goldblum · Tom Goldstein · Bo Li · Avi Schwarzschild
- 2020 Poster: Detection as Regression: Certified Object Detection with Median Smoothing »
  Ping-yeh Chiang · Michael Curry · Ahmed Abdelkader · Aounon Kumar · John Dickerson · Tom Goldstein
- 2020 Poster: Certifying Confidence via Randomized Smoothing »
  Aounon Kumar · Alexander Levine · Soheil Feizi · Tom Goldstein
- 2020 Poster: Inverting Gradients - How easy is it to break privacy in federated learning? »
  Jonas Geiping · Hartmut Bauermeister · Hannah Dröge · Michael Moeller
- 2020 Poster: Adversarially Robust Few-Shot Learning: A Meta-Learning Approach »
  Micah Goldblum · Liam Fowl · Tom Goldstein
- 2020 Poster: Certifying Strategyproof Auction Networks »
  Michael Curry · Ping-yeh Chiang · Tom Goldstein · John Dickerson
- 2019 : Coffee/Poster session 1 »
  Shiro Takagi · Khurram Javed · Johanna Sommer · Amr Sharaf · Pierluca D'Oro · Ying Wei · Sivan Doveh · Colin White · Santiago Gonzalez · Cuong Nguyen · Mao Li · Tianhe Yu · Tiago Ramalho · Masahiro Nomura · Ahsan Alvi · Jean-Francois Ton · W. Ronny Huang · Jessica Lee · Sebastian Flennerhag · Michael Zhang · Abram Friesen · Paul Blomstedt · Alina Dubatovka · Sergey Bartunov · Subin Yi · Iaroslav Shcherbatyi · Christian Simon · Zeyuan Shang · David MacLeod · Lu Liu · Liam Fowl · Diego Mesquita · Deirdre Quillen
- 2019 Poster: Adversarial training for free! »
  Ali Shafahi · Mahyar Najibi · Mohammad Amin Ghiasi · Zheng Xu · John Dickerson · Christoph Studer · Larry Davis · Gavin Taylor · Tom Goldstein
- 2018 Poster: Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks »
  Ali Shafahi · W. Ronny Huang · Mahyar Najibi · Octavian Suciu · Christoph Studer · Tudor Dumitras · Tom Goldstein
- 2018 Poster: Visualizing the Loss Landscape of Neural Nets »
  Hao Li · Zheng Xu · Gavin Taylor · Christoph Studer · Tom Goldstein
- 2017 Poster: Training Quantized Nets: A Deeper Understanding »
  Hao Li · Soham De · Zheng Xu · Christoph Studer · Hanan Samet · Tom Goldstein
- 2015 : Spotlight »
  Furong Huang · William Gray Roncal · Tom Goldstein
- 2015 Poster: Adaptive Primal-Dual Splitting Methods for Statistical Learning and Image Processing »
  Tom Goldstein · Min Li · Xiaoming Yuan