We investigate a practical domain adaptation task, called source-free unsupervised domain adaptation (SFUDA), where a source pretrained model is adapted to the target domain without access to the source data. Existing techniques mainly leverage self-supervised pseudo-labeling to achieve class-wise global alignment [1] or rely on local structure extraction that encourages feature consistency among neighborhoods [2]. While impressive progress has been made, both lines of methods have their own drawbacks: the "global" approach is sensitive to noisy labels while the "local" counterpart suffers from the source bias. In this paper, we present Divide and Contrast (DaC), a new paradigm for SFUDA that strives to connect the good ends of both worlds while bypassing their limitations. Based on the prediction confidence of the source model, DaC divides the target data into source-like and target-specific samples, where each group is treated with tailored objectives under an adaptive contrastive learning framework. Specifically, the source-like samples are utilized for learning global class clustering thanks to their relatively clean labels. The noisier target-specific data are harnessed at the instance level for learning the intrinsic local structures. We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch. Extensive experiments on VisDA, Office-Home, and the more challenging DomainNet verify the superior performance of DaC over current state-of-the-art approaches. The code is available at https://github.com/ZyeZhang/DaC.git.
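To make the pipeline described in the abstract concrete, the sketch below illustrates in plain PyTorch the two ingredients named above: splitting target samples into source-like and target-specific groups by the source model's prediction confidence, and a kernel-based MMD term between the features of the two groups. This is only an illustrative approximation with assumed placeholder names (`tau`, dummy features and logits), not the authors' implementation; the official code is at the repository linked above.

```python
# Minimal sketch (not the DaC reference code): confidence-based sample split
# plus an RBF-kernel MMD loss between the two resulting feature groups.
import torch
import torch.nn.functional as F


def split_by_confidence(logits: torch.Tensor, tau: float = 0.95):
    """Boolean masks for source-like (confident) vs. target-specific samples."""
    conf, _ = F.softmax(logits, dim=1).max(dim=1)
    source_like = conf >= tau          # tau is an assumed confidence threshold
    return source_like, ~source_like


def gaussian_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD between feature batches x and y under an RBF kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()


# Usage with dummy tensors; in practice the features and logits come from the
# source-pretrained model applied to unlabeled target data.
feats = torch.randn(32, 256)
logits = torch.randn(32, 12)           # e.g. 12 classes as in VisDA
src_like, tgt_spec = split_by_confidence(logits)
if src_like.any() and tgt_spec.any():
    loss_mmd = gaussian_mmd(feats[src_like], feats[tgt_spec])
```

In DaC itself, the target-specific side of the alignment term is drawn from a memory bank of features rather than a single mini-batch, and the adaptive contrastive objectives are applied on top of this split; the sketch only shows the confidence-based division and a generic MMD alignment term.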
Author Information
Ziyi Zhang (LAMDA Lab, School of Artificial Intelligence, Nanjing University)
Weikai Chen (USC Institute for Creative Technologies)
Hui Cheng (Sun Yat-sen University)
Zhen Li (Chinese University of Hong Kong, Shenzhen)
Siyuan Li (Westlake University)
Liang Lin (Sun Yat-sen University)
Guanbin Li (Sun Yat-sen University)
More from the Same Authors
- 2021: Geometric Question Answering Towards Multimodal Numerical Reasoning
  Jiaqi Chen · Jianheng Tang · Jinghui Qin · Xiaodan Liang · Lingbo Liu · Eric Xing · Liang Lin
- 2022 Poster: Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis
  Xu Yan · Heshen Zhan · Chaoda Zheng · Jiantao Gao · Ruimao Zhang · Shuguang Cui · Zhen Li
- 2022 Spotlight: Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning
  Ziyi Zhang · Weikai Chen · Hui Cheng · Zhen Li · Siyuan Li · Liang Lin · Guanbin Li
- 2022 Spotlight: Lightning Talks 3A-3
  Xu Yan · Zheng Dong · Qiancheng Fu · Jing Tan · Hezhen Hu · Fukun Yin · Weilun Wang · Ke Xu · Heshen Zhan · Wen Liu · Qingshan Xu · Xiaotong Zhao · Chaoda Zheng · Ziheng Duan · Zilong Huang · Xintian Shi · Wengang Zhou · Yew Soon Ong · Pei Cheng · Hujun Bao · Houqiang Li · Wenbing Tao · Jiantao Gao · Bin Kang · Weiwei Xu · Limin Wang · Ruimao Zhang · Tao Chen · Gang Yu · Rynson Lau · Shuguang Cui · Zhen Li
- 2022 Spotlight: Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis
  Xu Yan · Heshen Zhan · Chaoda Zheng · Jiantao Gao · Ruimao Zhang · Shuguang Cui · Zhen Li
- 2022 Poster: Structure-Preserving 3D Garment Modeling with Neural Sewing Machines
  Xipeng Chen · Guangrun Wang · Dizhong Zhu · Xiaodan Liang · Philip Torr · Liang Lin
- 2022 Poster: HSDF: Hybrid Sign and Distance Field for Modeling Surfaces with Arbitrary Topologies
  Li Wang · Jie Yang · Weikai Chen · Xiaoxu Meng · Bo Yang · Jintao Li · Lin Gao
- 2022 Poster: AMOS: A Large-Scale Abdominal Multi-Organ Benchmark for Versatile Medical Image Segmentation
  Yuanfeng Ji · Haotian Bai · Chongjian Ge · Jie Yang · Ye Zhu · Ruimao Zhang · Zhen Li · Lingyan Zhang · Wanling Ma · Xiang Wan · Ping Luo
- 2021 Poster: Rethinking the Pruning Criteria for Convolutional Neural Network
  Zhongzhan Huang · Wenqi Shao · Xinjiang Wang · Liang Lin · Ping Luo
- 2021 Poster: OctField: Hierarchical Implicit Functions for 3D Modeling
  Jia-Heng Tang · Weikai Chen · Jie Yang · Bo Wang · Songrun Liu · Bo Yang · Lin Gao
- 2020 Poster: Auto-Panoptic: Cooperative Multi-Component Architecture Search for Panoptic Segmentation
  Yangxin Wu · Gengwei Zhang · Hang Xu · Xiaodan Liang · Liang Lin
- 2019 Poster: Learning to Infer Implicit Surfaces without 3D Supervision
  Shichen Liu · Shunsuke Saito · Weikai Chen · Hao Li
- 2018 Poster: Symbolic Graph Reasoning Meets Convolutions
  Xiaodan Liang · Zhiting Hu · Hao Zhang · Liang Lin · Eric Xing
- 2018 Poster: Hybrid Knowledge Routed Modules for Large-scale Object Detection
  ChenHan Jiang · Hang Xu · Xiaodan Liang · Liang Lin
- 2018 Poster: Kalman Normalization: Normalizing Internal Representations Across Network Layers
  Guangrun Wang · Jiefeng Peng · Ping Luo · Xinjiang Wang · Liang Lin
- 2014 Poster: Deep Joint Task Learning for Generic Object Extraction
  Xiaolong Wang · Liliang Zhang · Liang Lin · Zhujin Liang · Wangmeng Zuo