

Poster in Workshop: CtrlGen: Controllable Generative Modeling in Language and Vision

C^3: Contrastive Learning for Cross-domain Correspondence in Few-shot Image Generation

Hyukgi Lee · Gi-Cheon Kang · Chang-Hoon Jeong · Hanwool Sul · Byoung-Tak Zhang


Abstract:

Few-shot image generation is the task of generating high-quality, diverse images that fit a target domain: a generative model must adapt from a source domain to the target domain given only a few target-domain images. Despite recent progress, cutting-edge generative models (e.g., GANs) still struggle to synthesize high-quality and diverse images in the few-shot setting. One of the biggest hurdles is that the number of target-domain images is too small to approximate the true distribution of the target domain, so an effective few-shot adaptation method is needed. In this paper, we propose a simple yet effective method, C^3: Contrastive Learning for Cross-domain Correspondence. C^3 constructs positive and negative pairs of images from the two domains and makes the generative model learn the cross-domain correspondence (i.e., the semantic mapping from the source domain to the target domain) explicitly via contrastive learning. As a result, our proposed method generates more realistic and diverse images than the baseline methods and outperforms state-of-the-art approaches on both photorealistic and non-photorealistic domains.
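To make the idea of learning cross-domain correspondence via contrastive pairs concrete, below is a minimal sketch of an InfoNCE-style objective. It assumes the positive pair is the source-domain and target-domain generations from the same latent code, with generations from other latent codes in the batch serving as negatives; the feature extractor, temperature, and exact pairing scheme are illustrative assumptions, not the authors' precise formulation.

```python
import torch
import torch.nn.functional as F

def cross_domain_contrastive_loss(src_feats, tgt_feats, temperature=0.07):
    """Contrastive (InfoNCE-style) loss over cross-domain pairs.

    src_feats, tgt_feats: (N, D) features of G_source(z_i) and G_target(z_i)
    for the same batch of latent codes z_1..z_N. The i-th source feature and
    the i-th target feature form a positive pair; all other combinations in
    the batch act as negatives.
    """
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    logits = src @ tgt.t() / temperature                     # (N, N) similarity matrix
    labels = torch.arange(src.size(0), device=src.device)    # positives lie on the diagonal
    # Symmetric loss over both matching directions (source->target and target->source).
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

In this sketch, pulling matched source/target features together while pushing mismatched ones apart is what encourages the adapted generator to preserve the semantic structure of the source domain when mapping to the target domain.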