
One-Sided Unsupervised Domain Mapping
Sagie Benaim · Lior Wolf

Tue Dec 05 06:30 PM -- 10:30 PM (PST) @ Pacific Ballroom #92
In unsupervised domain mapping, the learner is given two unmatched datasets $A$ and $B$. The goal is to learn a mapping $G_{AB}$ that translates a sample in $A$ to the analogous sample in $B$. Recent approaches have shown that convincing mappings are obtained when both $G_{AB}$ and the inverse mapping $G_{BA}$ are learned simultaneously. In this work, we present a method for learning $G_{AB}$ without learning $G_{BA}$. This is done by learning a mapping that maintains the distance between a pair of samples. Moreover, good mappings are obtained even when maintaining only the distance between different parts of the same sample before and after mapping. We present experimental results showing that the new method not only allows one-sided mapping learning, but also yields better numerical results than the existing circularity-based constraint. Our entire code is made publicly available at~\url{https://github.com/sagiebenaim/DistanceGAN}.
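The core idea, maintaining pairwise distances under the mapping, can be sketched with a simple loss term. The following is a minimal NumPy illustration, not the authors' implementation (see the linked repository for that): it compares pairwise L1 distances within a batch before and after mapping, with each set of distances standardized so the two domains are comparable in scale. The function names and the choice of L1 distance here are illustrative assumptions.

```python
import numpy as np

def pairwise_l1(batch):
    # L1 distance between every pair of flattened samples in the batch.
    flat = batch.reshape(batch.shape[0], -1)
    n = flat.shape[0]
    dists = []
    for i in range(n):
        for j in range(i + 1, n):
            dists.append(np.abs(flat[i] - flat[j]).sum())
    return np.array(dists)

def distance_loss(src_batch, mapped_batch):
    # Penalize changes in standardized pairwise distances under the mapping.
    d_src = pairwise_l1(src_batch)
    d_map = pairwise_l1(mapped_batch)
    # Standardize each set of distances (zero mean, unit variance) so that
    # an overall scale change between domains is not penalized.
    d_src = (d_src - d_src.mean()) / (d_src.std() + 1e-8)
    d_map = (d_map - d_map.mean()) / (d_map.std() + 1e-8)
    return np.abs(d_src - d_map).mean()
```

An identity mapping, or any global rescaling, incurs zero loss under this formulation, while a mapping that distorts relative distances between samples is penalized. In training, such a term would be added to the usual adversarial loss on the target domain.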

Author Information

Sagie Benaim (Tel Aviv University)
Lior Wolf (Facebook AI Research)
