

1st Oral Presentation in Workshop: Vision Transformers: Theory and Applications

CLUDA : Contrastive Learning in Unsupervised Domain Adaptation for Semantic Segmentation

Midhun Vayyat · Kasi Jaswin · Anuraag Bhattacharya · Shuaib Ahmed · Rahul Tallamraju


Abstract:

In this work, we propose CLUDA, a simple yet novel method for performing unsupervised domain adaptation (UDA) for semantic segmentation by incorporating contrastive losses into a student-teacher learning paradigm that makes use of pseudo-labels generated from the target domain by the teacher network. More specifically, we extract a multi-level fused-feature map from the encoder and apply contrastive loss across different classes and different domains, via source-target mixing of images. We consistently improve performance for various feature-encoder architectures and for different domain adaptation datasets in semantic segmentation. Furthermore, we introduce a learned-weighted contrastive loss to improve upon a state-of-the-art multi-resolution training approach in UDA. We produce state-of-the-art results on the GTA → Cityscapes (74.4 mIoU, +0.6) and Synthia → Cityscapes (67.2 mIoU, +1.4) benchmarks. CLUDA effectively demonstrates contrastive learning in UDA as a generic method which can be easily integrated into any existing UDA approach for semantic segmentation tasks. Please refer to the supplementary material for details on implementation.
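The abstract does not spell out the exact loss, but the class-wise contrastive term it describes is typically an instance of the supervised contrastive (InfoNCE-style) loss: for each anchor feature, embeddings of the same class (pulled from either domain via source-target mixing) act as positives and all other embeddings as negatives. The sketch below is a minimal, dependency-free illustration of that generic form; the function name, temperature value, and toy inputs are assumptions for illustration, not CLUDA's actual implementation.

```python
import math

def _normalize(v):
    """L2-normalize a feature vector (contrastive losses compare directions)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def class_contrastive_loss(feats, labels, tau=0.1):
    """Generic supervised (class-wise) contrastive loss.

    feats:  list of feature vectors (e.g. pooled per-pixel embeddings),
    labels: class id per vector (from ground truth or teacher pseudo-labels),
    tau:    temperature (0.1 is a common default, assumed here).
    For each anchor, same-class vectors are positives; all others are negatives.
    """
    z = [_normalize(f) for f in feats]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    total, n_anchors = 0.0, 0
    for i in range(len(z)):
        pos = [j for j in range(len(z)) if j != i and labels[j] == labels[i]]
        if not pos:  # anchors with no positive pair are skipped
            continue
        # denominator: similarities to every other sample (positives + negatives)
        denom = sum(math.exp(dot(z[i], z[k]) / tau)
                    for k in range(len(z)) if k != i)
        # average -log p(positive | anchor) over the positive set
        total += -sum(math.log(math.exp(dot(z[i], z[p]) / tau) / denom)
                      for p in pos) / len(pos)
        n_anchors += 1
    return total / max(n_anchors, 1)

# Toy check: well-clustered classes yield a lower loss than mixed-up ones.
labels = [0, 0, 1, 1]
clustered = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
scrambled = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.1, 0.9]]
```

In a CLUDA-style pipeline this loss would be computed on the multi-level fused-feature map, with target-domain labels supplied by the teacher's pseudo-labels; minimizing it pulls same-class features from both domains together, which is the class- and domain-level alignment the abstract refers to.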
