

Poster in Workshop: Medical Imaging meets NeurIPS

Adversarial Diffusion Models for Unsupervised Medical Image Synthesis

Muzaffer Özbey · Onat Dalmaz · Atakan Bedel · Salman Ul Hassan Dar · Şaban Öztürk · Alper Güngör · Tolga Cukur


Abstract:

Generative adversarial networks (GANs) that learn a one-shot mapping from source to target images have been established as the state of the art in medical image synthesis tasks. However, GAN models characterize the target image distribution only implicitly, so they can suffer from limited sample fidelity and diversity. Here, we propose SynDiff, a novel method based on diffusion modeling, for improved reliability and performance in medical image synthesis. To learn a direct correlate of the image distribution, SynDiff employs conditional diffusion to gradually map noise and source images onto the target image. For fast sampling during inference, large diffusion step sizes are coupled with adversarial projections in the reverse diffusion process. To enable training on unpaired datasets, a cycle-consistent architecture is introduced with coupled diffusion processes that synthesize the target image given the source and vice versa. Experiments on a public multi-contrast MRI dataset demonstrate the superiority of SynDiff over competing GAN and diffusion models.
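
The sketch below illustrates the general idea of an adversarial reverse-diffusion step for image translation as described in the abstract: a conditional generator denoises a noisy target image over a large step size given the source-contrast image, and a discriminator judges the denoised sample. This is a minimal, hypothetical illustration, not the authors' implementation; the network architectures, noise schedule, step size, losses, and the use of paired samples here are all simplifying assumptions (SynDiff's actual cycle-consistent, unpaired formulation couples two such source-to-target and target-to-source models).

```python
# Hypothetical sketch of adversarial reverse diffusion for source-to-target
# translation. Shapes, schedules, and losses are illustrative assumptions,
# not the SynDiff reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

T, K = 1000, 250                                   # total diffusion steps, large step size
betas = torch.linspace(1e-4, 2e-2, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t):
    """Forward diffusion: noisy sample x_t of the clean target x0 at step t."""
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    return a.sqrt() * x0 + (1 - a).sqrt() * torch.randn_like(x0)

class CondGenerator(nn.Module):
    """Predicts a denoised target estimate from (x_t, source image, timestep)."""
    def __init__(self, ch=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * ch + 1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, ch, 3, padding=1),
        )
    def forward(self, x_t, src, t):
        # Broadcast the normalized timestep as an extra input channel.
        t_map = (t.float() / T).view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, src, t_map], dim=1))

class Discriminator(nn.Module):
    """Judges whether a denoised sample is plausible given (x_t, source image)."""
    def __init__(self, ch=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * ch, hidden, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(hidden, hidden, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(hidden, 1, 4, 2, 1),
        )
    def forward(self, x_prev, x_t, src):
        return self.net(torch.cat([x_prev, x_t, src], dim=1))

# One adversarial training step on a toy paired batch (paired only for brevity).
G, D = CondGenerator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

src = torch.randn(4, 1, 64, 64)                    # source-contrast images
tgt = torch.randn(4, 1, 64, 64)                    # target-contrast images
t = torch.randint(K, T, (4,))                      # current (large) timestep
x_t = q_sample(tgt, t)                             # noisy target at step t
x_prev_real = q_sample(tgt, t - K)                 # less-noisy target at step t - K

# Discriminator update: real denoised samples vs. generated ones.
x_prev_fake = G(x_t, src, t).detach()
d_loss = F.binary_cross_entropy_with_logits(
    D(x_prev_real, x_t, src), torch.ones(4, 1, 8, 8)
) + F.binary_cross_entropy_with_logits(
    D(x_prev_fake, x_t, src), torch.zeros(4, 1, 8, 8)
)
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update: fool the discriminator, plus an L1 consistency term.
x_prev_fake = G(x_t, src, t)
g_loss = F.binary_cross_entropy_with_logits(
    D(x_prev_fake, x_t, src), torch.ones(4, 1, 8, 8)
) + F.l1_loss(x_prev_fake, x_prev_real)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At inference, such a generator would be applied iteratively from pure noise, conditioning on the source image at every large reverse step, so that only T/K denoising passes are needed rather than T.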
