

Poster in Workshop: NeurIPS 2023 Workshop on Diffusion Models

Beyond Generation: Exploring Generalization of Diffusion Models in Few-shot Segmentation

Jie Liu · Tao Hu · Jan-Jakob Sonke · Efstratios Gavves


Abstract:

Diffusion models have demonstrated a superior capability for generating high-quality images. While their proficiency in image generation is well established, the generalization of diffusion models to few-shot segmentation remains scarcely explored. In this paper, we delve into the generalization of a pretrained diffusion model, specifically Stable Diffusion, within the feature space for few-shot segmentation. First, we propose a straightforward strategy to extract intermediate knowledge from diffusion models as image features and apply it to the few-shot segmentation of real images. Second, we introduce a training-free method that employs pretrained diffusion features for few-shot segmentation. Through extensive experiments on two benchmarks, the proposed method utilizing diffusion features outperforms weakly-supervised few-shot segmentation methods and the DINO-V2 baseline. Without any training on base classes, it also attains performance comparable to supervised methods.
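To make the two ideas in the abstract concrete, below is a minimal sketch of (1) extracting an intermediate Stable Diffusion UNet feature map for a real image and (2) a training-free cosine-similarity matching step against a masked support prototype. This is not the authors' released code: the model checkpoint, the choice of `up_blocks[1]` as the feature layer, the timestep, and the prototype/threshold matching rule are illustrative assumptions made for this sketch, using the Hugging Face `diffusers` library.

```python
# Minimal sketch (assumptions noted above), not the paper's exact implementation.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to(device)

features = {}

def make_hook(name):
    def _hook(module, inputs, output):
        # UNet up-blocks return a (B, C, H, W) feature tensor.
        features[name] = output if torch.is_tensor(output) else output[0]
    return _hook

# Assumed choice: hook one intermediate decoder block of the UNet.
pipe.unet.up_blocks[1].register_forward_hook(make_hook("up1"))

@torch.no_grad()
def extract_features(image, timestep=100):
    """image: (1, 3, H, W) tensor scaled to [-1, 1]. One noised UNet pass."""
    latents = pipe.vae.encode(image.to(device, pipe.vae.dtype)).latent_dist.sample()
    latents = latents * pipe.vae.config.scaling_factor
    noise = torch.randn_like(latents)
    t = torch.tensor([timestep], device=device)
    noisy = pipe.scheduler.add_noise(latents, noise, t)
    # Unconditional (empty) prompt embedding as the text condition.
    tokens = pipe.tokenizer(
        [""], padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        return_tensors="pt",
    ).input_ids.to(device)
    text_emb = pipe.text_encoder(tokens)[0]
    pipe.unet(noisy, t, encoder_hidden_states=text_emb)
    return features["up1"].float()  # (1, C, h, w)

@torch.no_grad()
def predict_mask(support_img, support_mask, query_img, thresh=0.7):
    """Training-free matching: cosine similarity of query features to a
    foreground prototype pooled from the masked support features."""
    f_s = extract_features(support_img)   # (1, C, h, w)
    f_q = extract_features(query_img)     # (1, C, h, w)
    m = F.interpolate(support_mask[None, None].float(),
                      size=f_s.shape[-2:], mode="nearest")
    proto = (f_s * m).sum(dim=(2, 3)) / m.sum().clamp(min=1)          # (1, C)
    sim = F.cosine_similarity(f_q, proto[..., None, None], dim=1)     # (1, h, w)
    return (sim > thresh).float()
```

The prototype-plus-threshold rule above is only one plausible way to use diffusion features without training on base classes; the paper's actual matching procedure and feature-layer selection may differ.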
