Image-to-image translation has recently achieved remarkable results. Despite this success, current methods perform poorly when translations between classes require large shape changes. We attribute this to the high-resolution bottlenecks used by current state-of-the-art image-to-image methods.
Therefore, in this work, we propose a novel deep hierarchical image-to-image translation method, called DeepI2I. We learn a model by leveraging hierarchical features: (a) structural information contained in the bottom layers and (b) semantic information extracted from the top layers. To enable the training of deep I2I models on small datasets, we propose a novel transfer learning method that transfers knowledge from pre-trained GANs. Specifically, we leverage the discriminator of a pre-trained GAN (i.e., BigGAN or StyleGAN) to initialize both the encoder and the discriminator of our model, and the pre-trained generator to initialize its generator. Applying knowledge transfer leads to an alignment problem between the encoder and generator; we introduce an adaptor network to address this. On many-class image-to-image translation on three datasets (Animal faces, Birds, and Foods), we decrease mFID by at least 35% compared to the state-of-the-art. Furthermore, we qualitatively and quantitatively demonstrate that transfer learning significantly improves the performance of I2I systems, especially for small datasets.
Finally, we are the first to perform I2I translations for domains with over 100 classes.
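The transfer scheme described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the weight shapes, layer names, and the single channel-mixing "adaptor" matrix are all hypothetical stand-ins for the actual BigGAN/StyleGAN layers and the adaptor network.

```python
import random

random.seed(0)

def rand_mat(rows, cols):
    """Random matrix as nested lists (stand-in for layer weights)."""
    return [[random.gauss(0.0, 1.0) for _ in range(cols)] for _ in range(rows)]

# Hypothetical pre-trained GAN weights (illustrative names and shapes,
# not the real BigGAN/StyleGAN parameterization).
pretrained_disc = {"conv1": rand_mat(64, 3), "conv2": rand_mat(128, 64)}
pretrained_gen  = {"deconv1": rand_mat(64, 128), "deconv2": rand_mat(3, 64)}

# Knowledge transfer as described in the abstract: the encoder and the
# discriminator are both initialized from the pre-trained discriminator,
# and the generator from the pre-trained generator.
encoder       = {k: [row[:] for row in w] for k, w in pretrained_disc.items()}
discriminator = {k: [row[:] for row in w] for k, w in pretrained_disc.items()}
generator     = {k: [row[:] for row in w] for k, w in pretrained_gen.items()}

# Adaptor (trained from scratch, randomly initialized): here a single
# channel-mixing matrix that maps an encoder feature vector into the
# space the generator expects, addressing the encoder/generator
# misalignment caused by the independent pre-training.
adaptor = rand_mat(128, 128)

def apply_adaptor(a, feat):
    """Matrix-vector product: align one encoder feature vector."""
    return [sum(a[i][j] * feat[j] for j in range(len(feat)))
            for i in range(len(a))]

feat = [random.gauss(0.0, 1.0) for _ in range(128)]  # an encoder feature
aligned = apply_adaptor(adaptor, feat)
print(len(aligned))  # 128
```

In the paper this idea operates on hierarchical feature maps at several resolutions (one adaptor per level); the sketch collapses each level to a single feature vector to keep the structure visible.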
Author Information
yaxing wang (Centre de Visió per Computador (CVC))
Lu Yu (computer vision center, UAB)
Joost van de Weijer (Computer Vision Center Barcelona)
More from the Same Authors
- 2023 Poster: FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning
  Dipam Goswami · Yuyang Liu · Bartłomiej Twardowski · Joost van de Weijer
- 2023 Poster: Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing
  kai wang · Fei Yang · Shiqi Yang · Muhammad Atif Butt · Joost van de Weijer
- 2022 Workshop: Vision Transformers: Theory and applications
  Fahad Shahbaz Khan · Gul Varol · Salman Khan · Ping Luo · Rao Anwer · Ashish Vaswani · Hisham Cholakkal · Niki Parmar · Joost van de Weijer · Mubarak Shah
- 2022 Spotlight: Lightning Talks 1B-4
  Andrei Atanov · Shiqi Yang · Wanshan Li · Yongchang Hao · Ziquan Liu · Jiaxin Shi · Anton Plaksin · Jiaxiang Chen · Ziqi Pan · yaxing wang · Yuxin Liu · Stepan Martyanov · Alessandro Rinaldo · Yuhao Zhou · Li Niu · Qingyuan Yang · Andrei Filatov · Yi Xu · Liqing Zhang · Lili Mou · Ruomin Huang · Teresa Yeo · kai wang · Daren Wang · Jessica Hwang · Yuanhong Xu · Qi Qian · Hu Ding · Michalis Titsias · Shangling Jui · Ajay Sohmshetty · Lester Mackey · Joost van de Weijer · Hao Li · Amir Zamir · Xiangyang Ji · Antoni Chan · Rong Jin
- 2022 Spotlight: Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation
  Shiqi Yang · yaxing wang · kai wang · Shangling Jui · Joost van de Weijer
- 2022 Poster: Attracting and Dispersing: A Simple Approach for Source-free Domain Adaptation
  Shiqi Yang · yaxing wang · kai wang · Shangling Jui · Joost van de Weijer
- 2021 Poster: Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation
  Shiqi Yang · yaxing wang · Joost van de Weijer · Luis Herranz · Shangling Jui
- 2020 Poster: RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning
  Riccardo Del Chiaro · Bartłomiej Twardowski · Andrew Bagdanov · Joost van de Weijer
- 2018 Poster: Image-to-image translation for cross-domain disentanglement
  Abel Gonzalez-Garcia · Joost van de Weijer · Yoshua Bengio
- 2018 Poster: Memory Replay GANs: Learning to Generate New Categories without Forgetting
  Chenshen Wu · Luis Herranz · Xialei Liu · yaxing wang · Joost van de Weijer · Bogdan Raducanu
- 2011 Poster: Portmanteau Vocabularies for Multi-Cue Image Representation
  Fahad S Khan · Joost van de Weijer · Andrew D Bagdanov · Maria Vanrell