
Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks
Yabo Zhang · Mingshuai Yao · Yuxiang Wei · Zhilong Ji · Jinfeng Bai · Wangmeng Zuo

One-shot generative domain adaption aims to transfer a generator pre-trained on one domain to a new domain using only one reference image. However, it remains very challenging for the adapted generator (i) to generate diverse images inherited from the pre-trained generator while (ii) faithfully acquiring the domain-specific attributes and styles of the reference image. In this paper, we present a novel one-shot generative domain adaption method, i.e., DiFa, for diverse generation and faithful adaptation. For global-level adaptation, we leverage the difference between the CLIP embedding of the reference image and the mean embedding of source images to constrain the target generator. For local-level adaptation, we introduce an attentive style loss that aligns each intermediate token of an adapted image with the corresponding token of the reference image. To facilitate diverse generation, selective cross-domain consistency is introduced to select and retain domain-sharing attributes in the editing latent $\mathcal{W}+$ space, thereby inheriting the diversity of the pre-trained generator. Extensive experiments show that our method outperforms state-of-the-art methods both quantitatively and qualitatively, especially in cases with a large domain gap. Moreover, DiFa can easily be extended to zero-shot generative domain adaption with appealing results.
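The global-level constraint described above resembles a directional loss in CLIP embedding space: the adaptation direction of each generated sample is encouraged to match the direction from the mean source embedding to the reference embedding. A minimal sketch, with plain NumPy vectors standing in for real CLIP features and with function and variable names of our own choosing (not from the paper):

```python
import numpy as np

def directional_loss(e_ref, e_src_mean, e_adapted, e_source):
    """Align the adapted-vs-source direction with the
    reference-vs-source-mean direction (cosine-based).
    All inputs are stand-ins for CLIP image embeddings (1-D arrays)."""
    d_target = e_ref - e_src_mean      # domain-gap direction given by the reference
    d_sample = e_adapted - e_source    # per-sample adaptation direction
    cos = np.dot(d_target, d_sample) / (
        np.linalg.norm(d_target) * np.linalg.norm(d_sample) + 1e-8)
    return 1.0 - cos                   # zero when the directions are parallel

# toy check: when both directions coincide, the loss vanishes
a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 1.0, 0.0])
loss = directional_loss(a + b, b, a + b, b)  # both differences equal a
```

In practice the embeddings would come from a frozen CLIP image encoder, and this term would be one component of the full training objective alongside the attentive style loss and the consistency regularizer.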

Author Information

Yabo Zhang (Harbin Institute of Technology)
Mingshuai Yao (Dalian University of Technology)
Yuxiang Wei (Harbin Institute of Technology)
Zhilong Ji (Tomorrow Advancing Life)
Jinfeng Bai (Institute of Automation, Chinese Academy of Sciences)
Wangmeng Zuo (Harbin Institute of Technology)