

Poster

Invisible Image Watermarks Are Provably Removable Using Generative AI

Xuandong Zhao · Kexun Zhang · Zihao Su · Saastha Vasan · Ilya Grishchenko · Christopher Kruegel · Giovanni Vigna · Yu-Xiang Wang · Lei Li

East Exhibit Hall A-C #4610
Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Invisible watermarks safeguard images' copyrights by embedding hidden messages detectable only by their owners. They also deter the misuse of images, especially those generated by AI models. We propose a family of regeneration attacks that remove these invisible watermarks. The proposed attack first adds random noise to an image to destroy the watermark, then reconstructs the image. This approach is flexible and can be instantiated with many existing image-denoising algorithms and pre-trained generative models, such as diffusion models. Through formal proofs and extensive empirical evaluations, we demonstrate that pixel-level invisible watermarks are vulnerable to this regeneration attack. Our results show that, across four different pixel-level watermarking schemes, the proposed method consistently outperforms existing attack techniques, achieving lower detection rates and higher image quality. However, watermarks that keep the image semantically similar can serve as an alternative defense against our attacks. Our findings underscore the need for a shift in research and industry emphasis from invisible watermarks to semantic-preserving watermarks. Code is available at https://github.com/XuandongZhao/WatermarkAttacker
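The two-step attack described in the abstract (noise, then reconstruct) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `mean_filter_denoise` placeholder is an assumption standing in for the denoising algorithms or pre-trained diffusion models the paper actually uses, and the noise level `noise_std` is an arbitrary illustrative value.

```python
import numpy as np

def mean_filter_denoise(img):
    # Placeholder denoiser: 3x3 box filter via shifted sums (edge-padded).
    # The real attack would use a learned denoiser or a diffusion model.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def regeneration_attack(image, noise_std=0.1, denoise=mean_filter_denoise, rng=None):
    """Regeneration attack sketch: add Gaussian noise to destroy a
    pixel-level watermark, then reconstruct the image with a denoiser.
    `image` is a float array with values in [0, 1]."""
    rng = np.random.default_rng(0) if rng is None else rng
    noised = image + rng.normal(0.0, noise_std, image.shape)  # step 1: destroy watermark
    return np.clip(denoise(noised), 0.0, 1.0)                 # step 2: reconstruct
```

Any denoiser with the same array-in/array-out signature can be swapped in for `denoise`, which reflects the flexibility the abstract claims for instantiating the attack with different reconstruction models.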
