Poster
Multi-source Domain Adaptation for Semantic Segmentation
Sicheng Zhao · Bo Li · Xiangyu Yue · Yang Gu · Pengfei Xu · Runbo Hu · Hua Chai · Kurt Keutzer
East Exhibition Hall B, C #80
Keywords: Applications; Image Segmentation; Visual Scene Analysis and Interpretation; Deep Learning; Adversarial Networks
Simulation-to-real domain adaptation for semantic segmentation has been actively studied for various applications such as autonomous driving. Existing methods mainly focus on a single-source setting, which cannot easily handle the more practical scenario of multiple sources with different distributions. In this paper, we propose to investigate multi-source domain adaptation for semantic segmentation. Specifically, we design a novel framework, termed Multi-source Adversarial Domain Aggregation Network (MADAN), which can be trained in an end-to-end manner. First, we generate an adapted domain for each source with dynamic semantic consistency, performing pixel-level cycle-consistent alignment toward the target. Second, we propose a sub-domain aggregation discriminator and a cross-domain cycle discriminator to aggregate the different adapted domains more closely. Finally, feature-level alignment is performed between the aggregated domain and the target domain while training the segmentation network. Extensive experiments from the synthetic GTA and SYNTHIA datasets to the real Cityscapes and BDDS datasets demonstrate that the proposed MADAN model outperforms state-of-the-art approaches. Our source code is released at: https://github.com/Luodian/MADAN.
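The adversarial aggregation idea in the abstract can be illustrated with a minimal numpy sketch. A binary discriminator scores features from two adapted source domains; its loss separates them, while the generators' loss pushes each adapted domain toward the other's label, pulling the adapted domains together. All names and the linear-discriminator form here are illustrative assumptions, not the paper's exact sub-domain aggregation discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, y):
    # binary cross-entropy on discriminator probabilities
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def aggregation_losses(adapted_feats, disc_w):
    """Illustrative sub-domain aggregation objective (hypothetical toy form):
    the discriminator loss tells the two adapted domains apart, while the
    generator loss swaps the labels so each source is pushed toward the
    other, aggregating the adapted domains."""
    f1, f2 = adapted_feats
    p1 = sigmoid(f1 @ disc_w)  # scores for features from adapted source 1
    p2 = sigmoid(f2 @ disc_w)  # scores for features from adapted source 2
    d_loss = bce(p1, np.ones_like(p1)) + bce(p2, np.zeros_like(p2))
    g_loss = bce(p1, np.zeros_like(p1)) + bce(p2, np.ones_like(p2))
    return d_loss, g_loss

# Toy usage: random 4-d features for two adapted domains, linear discriminator.
f1 = rng.normal(size=(8, 4))
f2 = rng.normal(size=(8, 4)) + 2.0
w = rng.normal(size=4)
d_loss, g_loss = aggregation_losses((f1, f2), w)
```

In MADAN this adversarial game is played with convolutional discriminators over images and feature maps rather than a linear scorer, but the min-max structure is the same: alternating updates on `d_loss` and `g_loss` drive the adapted domains toward indistinguishability.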