Poster in Workshop: Workshop on Machine Learning Safety

Multiple Remote Adversarial Patches: Generating Patches based on Diffusion Models for Object Detection using CNNs

Kento Oonishi · Tsunato Nakai · Daisuke Suzuki


Abstract:

Adversarial patches can fool object detection systems, which poses a severe threat to machine learning models. Many researchers have therefore focused on generating strong adversarial patches. Remote adversarial patches, which are placed outside the target objects, are candidates for strong adversarial patches. This study gives a concrete model of how adversarial patches affect convolutional neural networks (CNNs), namely, a diffusion model. Our diffusion model shows that multiple remote adversarial patches pose a severe threat to the YOLOv2 CNN. Our experiments also demonstrate that two remote adversarial patches reduce the average existence probability to 12.81%, whereas Saha et al.'s original single adversarial patch reduces it only to 50.95%. Moreover, we generate adversarial patches on the SSD architecture, where two remote adversarial patches also significantly reduce the average existence probability, from 24.52% to 6.12%. Together, these results provide a framework for analyzing the effect of adversarial patch attacks.
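To illustrate the general idea behind remote adversarial patches, the following is a minimal sketch of jointly optimizing several patches placed at fixed locations outside the target object so as to suppress a detector's objectness ("existence probability") scores. It is not the authors' implementation: the names detector, apply_patches, and optimize_remote_patches, the patch locations, and the choice of loss (minimizing the maximum objectness score) are hypothetical placeholders, assuming a PyTorch detector whose forward pass returns an objectness map.

import torch

def apply_patches(image, patches, locations):
    # Paste each patch into the image at a fixed (y, x) location that lies
    # outside the target object's bounding box ("remote" placement).
    patched = image.clone()
    for patch, (y, x) in zip(patches, locations):
        _, h, w = patch.shape
        patched[:, :, y:y + h, x:x + w] = patch.clamp(0.0, 1.0)
    return patched

def optimize_remote_patches(detector, images, locations, patch_size=50,
                            steps=200, lr=0.03):
    # Jointly optimize multiple remote patches so that the detector assigns
    # low objectness everywhere, i.e. the target objects "disappear".
    patches = [torch.rand(3, patch_size, patch_size, requires_grad=True)
               for _ in locations]
    opt = torch.optim.Adam(patches, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.zeros(())
        for image in images:  # each image: (N, 3, H, W) in [0, 1]
            patched = apply_patches(image, patches, locations)
            objectness = detector(patched)      # assumed objectness map
            loss = loss + objectness.max()      # suppress strongest detection
        loss.backward()
        opt.step()
        with torch.no_grad():
            for p in patches:
                p.clamp_(0.0, 1.0)              # keep patches in valid pixel range
    return [p.detach() for p in patches]

In this sketch, passing two entries in locations corresponds to the two-remote-patch setting discussed above; the actual attack in the paper and in Saha et al.'s work may use a different loss, patch parameterization, and training procedure.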
