

Poster

AdvAD: Exploring Non-Parametric Diffusion for Imperceptible Adversarial Attacks

Jin Li · Ziqiang He · Anwei Luo · Jian-Fang Hu · Z. Jane Wang · Xiangui Kang

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract: Imperceptible adversarial attacks aim to fool DNNs by adding imperceptible perturbations to the input data. Previous methods typically improve the imperceptibility of attacks by integrating common attack paradigms with specially designed perception-based losses or the capabilities of generative models. In this paper, we propose Adversarial Attacks in Diffusion (AdvAD), a novel modeling framework distinct from existing attack paradigms. AdvAD innovatively conceptualizes attacking as a non-parametric diffusion process, theoretically exploring the basic modeling approach rather than relying on the denoising or generation abilities of regular diffusion models with neural networks. At each step, a much subtler yet effective adversarial guidance is crafted using only the attacked model, without any additional network, gradually steering the endpoint of the diffusion process from the original image to the desired imperceptible adversarial example. Grounded in the solid theory of the proposed non-parametric diffusion process, AdvAD achieves high attack efficacy and imperceptibility with intrinsically lower overall perturbation strength. Additionally, an enhanced version, AdvAD-X, is proposed to evaluate the extreme performance of our modeling framework under an ideal scenario. Extensive experiments demonstrate the effectiveness of the proposed AdvAD and AdvAD-X. Compared with state-of-the-art imperceptible attacks, AdvAD achieves an average of 99.9% (+17.3%) ASR with 1.34 (-0.97) $l_2$ distance, 49.74 (+4.76) PSNR, and 0.9974 (+0.0043) SSIM against four prevalent DNNs with three different architectures on the ImageNet-compatible dataset.
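The abstract describes crafting a subtle per-step adversarial guidance using only the attacked model's gradients, with no auxiliary network, so that the trajectory's endpoint becomes an imperceptible adversarial example. The toy sketch below illustrates only that high-level idea as a plain iterative signed-gradient loop on a hypothetical linear "model"; it is not the paper's non-parametric diffusion formulation, and the function names, schedule, and budget `eps` are all illustrative assumptions.

```python
import numpy as np

def toy_logit(x, w):
    # Hypothetical stand-in for the attacked model: a single linear logit.
    return float(w @ x)

def toy_grad(x, w):
    # Gradient of the toy logit w.r.t. the input (constant for a linear model).
    return w

def iterative_guidance_sketch(x0, w, steps=50, eps=0.03):
    """Toy sketch (not the authors' method): at each of `steps` iterations,
    a small signed-gradient nudge is applied, so the total perturbation
    stays within an l_inf budget of `eps` while the input drifts toward
    lowering the true-class logit. Only the attacked model's gradient is
    used; no additional network is involved."""
    x = x0.copy()
    for _ in range(steps):
        g = toy_grad(x, w)                    # guidance from the attacked model only
        x = x - (eps / steps) * np.sign(g)    # subtle per-step perturbation
    return np.clip(x, 0.0, 1.0)              # keep the result a valid image

rng = np.random.default_rng(0)
x0 = rng.random(8)                 # toy 8-dim "image" in [0, 1]
w = rng.standard_normal(8)         # toy model weights
x_adv = iterative_guidance_sketch(x0, w)
```

Because each step moves at most `eps / steps` per coordinate, the accumulated perturbation is bounded by `eps`, which loosely mirrors the abstract's point that many subtle steps yield a low overall perturbation strength.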
