Diffusion Generative Models meet Differential Privacy: A Theoretical Insight
Ziyu Huang · Wenpin Tang
Abstract
Score-based diffusion models have emerged as popular generative models trained on increasingly large datasets, yet they are often susceptible to attacks that can disclose sensitive information. To provide Differential Privacy (DP) guarantees, training these models on the score-matching objective with DP-SGD has become a common approach. In this work, we study Differentially Private Diffusion Models (DPDM) both theoretically and empirically. We establish a quantitative $L^2$ convergence rate for DP-SGD to its global optimum, which leads to the first error analysis of diffusion models trained with DP-SGD. Our theoretical framework contributes to uncertainty quantification in generative AI systems, providing convergence guarantees essential for trustworthy decision-making applications that require both privacy preservation and reliability.
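To make the training setup concrete, the following is a minimal sketch (not the authors' implementation) of DP-SGD applied to a denoising score-matching objective: per-example gradients are clipped in norm and perturbed with calibrated Gaussian noise before the parameter update. The network `ScoreNet`, the noise-level sampling, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: DP-SGD for denoising score matching (illustrative only).
import torch
import torch.nn as nn

class ScoreNet(nn.Module):
    """Toy score network s_theta(x, sigma) for low-dimensional data."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, sigma):
        # Condition on the noise level by concatenating sigma to the input.
        return self.net(torch.cat([x, sigma], dim=-1))

def dsm_loss(model, x0, sigma):
    """Denoising score matching: the score of the Gaussian smoothing
    kernel at x0 + sigma*eps is -eps/sigma, which serves as the target."""
    eps = torch.randn_like(x0)
    x_noisy = x0 + sigma * eps
    target = -eps / sigma
    return ((model(x_noisy, sigma) - target) ** 2).sum(dim=-1)

def dpsgd_step(model, optimizer, batch, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: compute per-example gradients, clip each to
    norm <= clip_norm, add Gaussian noise, average, then update."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x0 in batch:  # microbatches of size 1 yield per-example gradients
        optimizer.zero_grad()
        sigma = torch.rand(1, 1) * 0.9 + 0.1  # illustrative noise-level draw
        loss = dsm_loss(model, x0.unsqueeze(0), sigma).mean()
        loss.backward()
        # Clip this example's gradient to norm at most clip_norm.
        total_norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params))
        scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)
        for s, p in zip(summed, params):
            s += p.grad * scale
    # Add noise calibrated to the clipping norm, average, and step.
    for p, s in zip(params, summed):
        noise = torch.randn_like(s) * noise_multiplier * clip_norm
        p.grad = (s + noise) / len(batch)
    optimizer.step()

# Example usage on placeholder data (assumed, for illustration).
model = ScoreNet()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
data = torch.randn(256, 2)
dpsgd_step(model, opt, data[:32])
```

The microbatch-of-1 loop is the simplest way to obtain per-example gradients in plain PyTorch; production DP training typically uses vectorized per-sample gradients (e.g., via a library such as Opacus) for efficiency, with the same clip-then-noise mechanism.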