Poster in Workshop: NeurIPS 2023 Workshop on Diffusion Models

Strong generalization in diffusion models

Zahra Kadkhodaie · Florentin Guth · Eero Simoncelli · Stephane Mallat


Abstract:

High-quality samples generated with score-based reverse diffusion algorithms provide evidence that deep neural networks (DNNs) trained for denoising can learn high-dimensional densities, despite the curse of dimensionality. However, recent reports of memorization of the training set raise the question of whether these networks are learning the "true" continuous density of the data. Here, we show that two denoising DNNs trained on non-overlapping subsets of a dataset learn nearly the same score function, and thus the same density, with a surprisingly small number of training images. This strong generalization demonstrates an alignment of powerful inductive biases in the DNN architecture and/or training algorithm with properties of the data distribution. Our method is general and can be applied to assess generalization vs. memorization in any generative model.
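The following is a minimal sketch (not the authors' code) of the comparison idea the abstract describes: train two denoisers on disjoint halves of a dataset, then measure how close their outputs, and hence their implied score estimates via Tweedie's identity score(y) ≈ (denoiser(y) − y) / σ², are on held-out noisy inputs. The dataset, architecture, noise level, and training loop here are all placeholder assumptions; the paper's actual experimental setup may differ.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder data: random tensors standing in for real image patches.
images = torch.randn(2000, 1, 16, 16)
half_a, half_b = images[:1000], images[1000:]  # non-overlapping subsets

def make_denoiser():
    # Tiny bias-free CNN denoiser; a real experiment would use a larger
    # architecture (e.g., a UNet). Purely illustrative.
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1, bias=False), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1, bias=False), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1, bias=False),
    )

def train(denoiser, data, steps=500, sigma=0.5, lr=1e-3):
    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    for _ in range(steps):
        x = data[torch.randint(len(data), (64,))]
        y = x + sigma * torch.randn_like(x)      # noisy observation
        loss = ((denoiser(y) - x) ** 2).mean()   # MSE denoising objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return denoiser

net_a = train(make_denoiser(), half_a)
net_b = train(make_denoiser(), half_b)

# Compare the implied score estimates on fresh noisy inputs that neither
# network saw during training.
sigma = 0.5
with torch.no_grad():
    y = torch.randn(256, 1, 16, 16) + sigma * torch.randn(256, 1, 16, 16)
    score_a = (net_a(y) - y) / sigma**2  # Tweedie's identity
    score_b = (net_b(y) - y) / sigma**2
    cos = nn.functional.cosine_similarity(
        score_a.flatten(1), score_b.flatten(1), dim=1)
print(f"mean cosine similarity between the two score estimates: {cos.mean():.3f}")
```

Under the abstract's claim, the two score estimates should converge as the training set grows, even though the networks never share a single training image; high agreement on held-out inputs would then indicate strong generalization rather than memorization.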
