

Poster in Workshop: NeurIPS 2023 Workshop on Diffusion Models

Improved Convergence of Score-Based Diffusion Models via Prediction-Correction

Francesco Pedrotti · Jan Maas · Marco Mondelli


Abstract: Score-based generative models (SGMs) are powerful tools to sample from complex data distributions. The idea is to run an ergodic stochastic process for time $T_1$ and then learn to revert this process. As the approximate reverse process is initialized with the stationary distribution of the forward one, the existing analysis paradigm requires $T_1\to\infty$. This is problematic, however, as it leads to error propagation, unstable convergence results, and increased computational cost. We address the issue by considering a version of the popular predictor-corrector scheme: after running the forward process, we first estimate the final distribution via an inexact Langevin dynamics and then revert the process. Our main results provide convergence guarantees for this scheme, which has the key advantage that the forward process needs to run only for a fixed finite time $T_1$. Our bounds exhibit a mild logarithmic dependence on the input dimension and on the subgaussian norm of the target distribution, make minimal assumptions on the data, and require control of only the $L^2$ loss of the score approximation.
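To make the scheme concrete, here is a minimal, self-contained sketch (not the authors' code) of the prediction-correction sampler described in the abstract. It uses an Ornstein-Uhlenbeck forward process and a one-dimensional Gaussian toy target, so the exact score is available in closed form where a learned network would normally be used; the step sizes and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T1, n_steps = 2.0, 200          # finite forward horizon T_1 and reverse discretization
dt = T1 / n_steps
mu, sigma = 3.0, 0.5            # toy target: N(mu, sigma^2)

def score(x, t):
    # Exact score of the OU-perturbed target,
    # N(mu * e^{-t}, sigma^2 * e^{-2t} + 1 - e^{-2t});
    # in practice this would be a learned approximation s_theta(x, t).
    m = mu * np.exp(-t)
    v = sigma**2 * np.exp(-2 * t) + 1.0 - np.exp(-2 * t)
    return -(x - m) / v

# Corrector: (inexact) Langevin dynamics targeting the law of the forward
# process at time T_1, initialized at the OU stationary distribution N(0, 1).
x = rng.standard_normal(10_000)
h = 0.01                        # Langevin step size (assumption)
for _ in range(500):
    x = x + h * score(x, T1) + np.sqrt(2 * h) * rng.standard_normal(x.shape)

# Predictor: Euler-Maruyama discretization of the time-reversed OU SDE,
# dX = (X + 2 * score(X, T1 - s)) ds + sqrt(2) dW.
for k in range(n_steps):
    t = T1 - k * dt
    x = x + dt * (x + 2 * score(x, t)) + np.sqrt(2 * dt) * rng.standard_normal(x.shape)

print(f"sample mean ~ {x.mean():.2f} (target {mu}), std ~ {x.std():.2f} (target {sigma})")
```

Because the corrector brings the initialization close to the law of the forward process at time $T_1$, the reverse process does not rely on $T_1\to\infty$; in this sketch the samples should approximately recover the target mean and standard deviation.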
