Self-Guided Diffusion Models
Tao Hu · David Zhang · Yuki Asano · Gertjan Burghouts · Cees Snoek
Event URL: https://openreview.net/forum?id=Mf6NLebyqdq

Diffusion models have demonstrated remarkable progress in image generation quality, especially when guidance is used to control the generative process. However, such guidance requires a large number of image-annotation pairs for training and is thus dependent on their availability, correctness, and unbiasedness. In this paper, we aim to eliminate the need for such annotation by instead leveraging the flexibility of self-supervision signals to design a framework for self-guided diffusion models. By leveraging a feature extraction function and a self-annotation function, our method provides flexible guidance signals at various image granularities: from the level of holistic images to object boxes and even segmentation masks.
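As a rough illustration of the idea described above, the sketch below assumes a small frozen encoder standing in for a self-supervised feature extractor, a toy k-means step as the self-annotation function, and a tiny noise-prediction network conditioned on the resulting pseudo-labels, combined in a classifier-free-guidance style. All module names, shapes, and hyperparameters here are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of image-level self-guidance:
# frozen features -> pseudo-labels via clustering -> conditional denoiser
# -> classifier-free-guidance-style combination of the noise estimates.

import torch
import torch.nn as nn


class FeatureExtractor(nn.Module):
    """Stand-in for a frozen self-supervised encoder (hypothetical, not the paper's model)."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    @torch.no_grad()
    def forward(self, x):
        return self.net(x)


def self_annotate(features, k=10, iters=10):
    """Toy k-means over features; cluster ids serve as self-supervised pseudo-labels."""
    centers = features[torch.randperm(features.size(0))[:k]]
    for _ in range(iters):
        assign = torch.cdist(features, centers).argmin(dim=1)  # (N,) pseudo-labels
        for j in range(k):
            mask = assign == j
            if mask.any():
                centers[j] = features[mask].mean(dim=0)
    return assign


class ConditionalDenoiser(nn.Module):
    """Tiny noise-prediction network conditioned on a pseudo-label embedding."""

    def __init__(self, num_classes=10, emb_dim=32):
        super().__init__()
        self.embed = nn.Embedding(num_classes + 1, emb_dim)  # last index = "unconditional"
        self.net = nn.Sequential(
            nn.Conv2d(3 + emb_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x_t, y):
        emb = self.embed(y)[:, :, None, None].expand(-1, -1, x_t.size(2), x_t.size(3))
        return self.net(torch.cat([x_t, emb], dim=1))


def guided_noise_estimate(model, x_t, y, w=2.0, null_label=10):
    """Combine conditional and unconditional estimates, classifier-free-guidance style."""
    eps_cond = model(x_t, y)
    eps_uncond = model(x_t, torch.full_like(y, null_label))
    return eps_uncond + w * (eps_cond - eps_uncond)


if __name__ == "__main__":
    images = torch.randn(16, 3, 32, 32)          # placeholder batch
    feats = FeatureExtractor()(images)           # frozen feature extraction
    pseudo_labels = self_annotate(feats, k=10)   # self-annotation step
    model = ConditionalDenoiser(num_classes=10)
    x_t = torch.randn_like(images)               # noisy input at some timestep
    eps = guided_noise_estimate(model, x_t, pseudo_labels, w=2.0)
    print(eps.shape)                             # torch.Size([16, 3, 32, 32])
```

This sketch only covers image-level guidance; in the framework described above, the same feature-extraction and self-annotation pattern extends to finer granularities such as object boxes and segmentation masks.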

Author Information

Tao Hu (University of Amsterdam)
David Zhang (University of Amsterdam)
Yuki Asano (University of Amsterdam)
Gertjan Burghouts (TNO - Intelligent Imaging)
Cees Snoek (University of Amsterdam)
