

Poster in Workshop: NeurIPS 2023 Workshop on Diffusion Models

Generalized Contrastive Divergence: Joint Training of Energy-Based Model and Diffusion Model through Inverse Reinforcement Learning

Sangwoong Yoon · Dohyun Kwon · Himchan Hwang · Yung-Kyun Noh · Frank Park


Abstract:

We present Generalized Contrastive Divergence (GCD), a novel objective function for training an energy-based model (EBM) and a sampler simultaneously. GCD generalizes Contrastive Divergence, a celebrated algorithm for training EBMs, by replacing the MCMC distribution with a trainable sampler, such as a diffusion model. In GCD, the joint training of an EBM and a diffusion model is formulated as a minimax problem, which reaches an equilibrium when both models converge to the data distribution. Minimax learning with GCD bears an interesting equivalence to inverse reinforcement learning, where the energy corresponds to a negative reward, the diffusion model is a policy, and the real data are expert demonstrations. We present preliminary yet promising results showing that the joint training is beneficial for both the EBM and the diffusion model. In particular, GCD learning can be employed to fine-tune a diffusion model to boost its sample quality.
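As a rough sketch (not taken from the paper itself), the minimax objective described in the abstract can plausibly be written as follows, where $E_\theta$ denotes the energy, $q_\phi$ the trainable sampler (e.g., a diffusion model), $p_{\mathrm{data}}$ the data distribution, and $\mathcal{H}$ the entropy; the exact formulation in the paper may differ:

\[
\min_{\theta}\;\max_{\phi}\;\;
\mathbb{E}_{x\sim p_{\mathrm{data}}}\!\left[E_\theta(x)\right]
\;-\;\mathbb{E}_{x\sim q_\phi}\!\left[E_\theta(x)\right]
\;+\;\mathcal{H}(q_\phi).
\]

Under this assumed form, the inner maximization pushes $q_\phi$ toward the Gibbs distribution $\propto \exp(-E_\theta)$ (recovering the role MCMC plays in standard Contrastive Divergence), while the outer minimization lowers the energy of data relative to sampler outputs, consistent with the abstract's claim that the equilibrium is reached when both models match the data distribution.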
