

Poster in Workshop: Workshop on Machine Learning Safety

Diagnostics for Deep Neural Networks with Automated Copy/Paste Attacks

Stephen Casper · Kaivalya Hariharan · Dylan Hadfield-Menell


Abstract:

Deep neural networks (DNNs) are powerful, but they can make mistakes that pose risks. A model performing well on a test set does not imply safety in deployment, so it is important to have additional evaluation tools to understand flaws. Adversarial examples can help reveal weaknesses, but they are often difficult for a human to interpret or draw generalizable conclusions from. Some previous works have addressed this by studying human-interpretable attacks. We build on these with three contributions. First, we introduce Search for Natural Adversarial Features Using Embeddings (SNAFUE), a fully automated method for finding "copy/paste" attacks in which one natural image can be pasted into another in order to induce an unrelated misclassification. Second, we use this to red team an ImageNet classifier and identify hundreds of easily describable sets of vulnerabilities. Third, we compare this approach with other interpretability tools by attempting to rediscover trojans. Our results suggest that SNAFUE can be useful for interpreting DNNs and generating adversarial data for them.
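To make the "copy/paste" attack format concrete, the sketch below shows how a candidate natural patch can be evaluated: paste it onto a batch of source images and measure how often an ImageNet classifier flips to an unrelated target class. This is only an illustration of the attack format the abstract describes, not the SNAFUE search procedure itself; the model, images, patch, paste location, and target class are all placeholder assumptions.

```python
# Minimal sketch (assumptions: a PyTorch ImageNet classifier, random tensors
# standing in for natural images and a candidate natural patch). Real use
# would apply the model's preprocessing/normalization to real images.
import torch
import torchvision.models as models


def copy_paste_attack_rate(classifier, images, patch, target_class, top=0, left=0):
    """Paste `patch` (C, ph, pw) onto every image in `images` (N, C, H, W)
    and return the fraction of images classified as `target_class` afterwards."""
    _, ph, pw = patch.shape
    attacked = images.clone()
    attacked[:, :, top:top + ph, left:left + pw] = patch  # overwrite a rectangular region
    with torch.no_grad():
        preds = classifier(attacked).argmax(dim=1)
    return (preds == target_class).float().mean().item()


# Placeholder usage: random tensors stand in for natural source images and a
# natural adversarial patch that a search procedure would have selected.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
images = torch.rand(8, 3, 224, 224)   # stand-in for natural source images
patch = torch.rand(3, 64, 64)         # stand-in for a candidate natural patch
rate = copy_paste_attack_rate(model, images, patch, target_class=309)  # 309 is an arbitrary class
print(f"fraction misclassified as target: {rate:.2f}")
```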
