

Poster

Multi-mapping Image-to-Image Translation via Learning Disentanglement

Xiaoming Yu · Yuanqi Chen · Shan Liu · Thomas Li · Ge Li

East Exhibition Hall B, C #88

Keywords: [ Applications ] [ Computer Vision ] [ Generative Models ] [ Algorithms -> Unsupervised Learning; Deep Learning -> Adversarial Networks; Deep Learning ]


Abstract:

Recent advances in image-to-image translation have approached the one-to-many mapping problem from two directions: multi-modal translation and multi-domain translation. However, existing methods adopt only one of the two perspectives, leaving each unable to solve the problems addressed by the other. To address this issue, we propose a novel unified model that bridges the two objectives. First, we disentangle the input images into latent representations using an encoder-decoder architecture with conditional adversarial training in the feature space. Then, we encourage the generator to learn multi-mappings through random cross-domain translation. As a result, we can manipulate different parts of the latent representation to perform multi-modal and multi-domain translation simultaneously. Experiments demonstrate that our method outperforms state-of-the-art methods.
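To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch of (a) a conditional adversary on the latent content features and (b) random cross-domain translation. All module names, layer sizes, and the toy architectures below (ContentEncoder, StyleEncoder, Generator, FeatureDiscriminator, random_cross_domain_translation) are illustrative assumptions, not the authors' implementation; losses and optimization are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContentEncoder(nn.Module):
    """Encodes an image into a (nominally domain-invariant) content feature map."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 4, 2, 1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class StyleEncoder(nn.Module):
    """Encodes an image into a low-dimensional style (modality) code."""
    def __init__(self, dim=64, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(dim, style_dim),
        )

    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):
    """Decodes a content map plus a (style, target-domain) condition into an image."""
    def __init__(self, dim=64, style_dim=8, n_domains=3):
        super().__init__()
        self.cond = nn.Linear(style_dim + n_domains, dim)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(2 * dim, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(dim, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, content, style, domain_onehot):
        cond = self.cond(torch.cat([style, domain_onehot], dim=1))
        cond = cond[:, :, None, None].expand(-1, -1, *content.shape[2:])
        return self.net(torch.cat([content, cond], dim=1))

class FeatureDiscriminator(nn.Module):
    """Adversary on the latent content features: it tries to guess the source
    domain, so fooling it pushes the content code to become domain-invariant."""
    def __init__(self, dim=64, n_domains=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim, 3, 1, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(dim, n_domains),
        )

    def forward(self, content):
        return self.net(content)

def random_cross_domain_translation(x, n_domains, style_dim, E_c, G):
    """One multi-mapping forward pass: sample a random target domain
    (multi-domain) and a random style code from the prior (multi-modal)."""
    content = E_c(x)
    target = torch.randint(0, n_domains, (x.size(0),))
    style = torch.randn(x.size(0), style_dim)
    return G(content, style, F.one_hot(target, n_domains).float())

# Smoke test on random 32x32 "images" across 3 hypothetical domains.
E_c, E_s, G, D_feat = ContentEncoder(), StyleEncoder(), Generator(), FeatureDiscriminator()
x = torch.randn(4, 3, 32, 32)
y_sampled = random_cross_domain_translation(x, n_domains=3, style_dim=8, E_c=E_c, G=G)
# Reference-guided variant: reuse a style extracted from real images instead.
y_ref = G(E_c(x), E_s(x), F.one_hot(torch.tensor([0, 1, 2, 0]), 3).float())
domain_logits = D_feat(E_c(x))  # feeds the conditional adversarial loss in feature space
print(y_sampled.shape, y_ref.shape, domain_logits.shape)
```

The split into a spatial content map and a low-dimensional style code is what makes independent manipulation possible: resampling the style code varies the output modality, while changing the domain one-hot switches the target domain, so one generator covers both kinds of one-to-many mapping.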
