We aim for source-free domain adaptation, where the task is to deploy a model pre-trained on source domains to target domains. The challenges stem from the distribution shift from the source to the target domain, coupled with the unavailability of any source data or labeled target data for optimization. Rather than fine-tuning the model by updating its parameters, we propose to perturb the source model to achieve adaptation to target domains. We introduce perturbations into the model parameters by variational Bayesian inference in a probabilistic framework. In doing so, we can effectively adapt the model to the target domain while largely preserving its discriminative ability. Importantly, we demonstrate the theoretical connection to learning Bayesian neural networks, which establishes the generalizability of the perturbed model to target domains. To enable more efficient optimization, we further employ a parameter-sharing strategy, which substantially reduces the number of learnable parameters compared to a fully Bayesian neural network. Our model perturbation provides a new probabilistic approach to domain adaptation that enables efficient adaptation to target domains while maximally preserving the knowledge in source models. Experiments on several source-free benchmarks under three different evaluation settings verify the effectiveness of the proposed variational model perturbation for source-free domain adaptation.
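To make the idea concrete, here is a minimal PyTorch sketch of variational model perturbation; it is not the authors' implementation. The frozen source weights W are perturbed as W + eps, where eps ~ N(mu, sigma^2) is learned by variational inference via the reparameterization trick; sharing one (mu, sigma) pair per output unit illustrates one possible parameter-sharing strategy. The class name, function name, and prior scale are hypothetical choices for illustration.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F

class PerturbedLinear(nn.Module):
    """Hypothetical sketch: a linear layer whose frozen source weights W
    are perturbed as W + eps, with eps ~ N(mu, sigma^2) learned variationally.
    mu and sigma are shared across each output row (parameter sharing),
    so only 2 * out_features variational parameters are trained."""

    def __init__(self, source_layer: nn.Linear):
        super().__init__()
        # Keep the pre-trained source parameters fixed (not trained).
        self.register_buffer("weight", source_layer.weight.detach().clone())
        bias = source_layer.bias
        self.register_buffer("bias", bias.detach().clone() if bias is not None else None)
        out_features = self.weight.shape[0]
        # One (mu, log_sigma) pair per output unit, broadcast over the row.
        self.mu = nn.Parameter(torch.zeros(out_features, 1))
        self.log_sigma = nn.Parameter(torch.full((out_features, 1), -5.0))

    def forward(self, x):
        # Reparameterization trick: differentiable sample of the perturbation.
        eps = self.mu + self.log_sigma.exp() * torch.randn_like(self.weight)
        return F.linear(x, self.weight + eps, self.bias)

def kl_to_prior(layer: PerturbedLinear, prior_sigma: float = 0.01) -> torch.Tensor:
    """KL(q(eps) || N(0, prior_sigma^2)); a small prior scale keeps the
    perturbation near zero, preserving the source model's knowledge."""
    var = layer.log_sigma.exp().pow(2)
    return 0.5 * ((var + layer.mu.pow(2)) / prior_sigma ** 2
                  - 1.0
                  - 2.0 * layer.log_sigma
                  + 2.0 * math.log(prior_sigma)).sum()
```

In this reading, adaptation would minimize an unsupervised objective on unlabeled target data (for example, prediction entropy) plus the KL term, updating only mu and log_sigma while the source weights stay untouched.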
Author Information
Mengmeng Jing (University of Electronic Science and Technology of China)
Mengmeng Jing received the B.Eng. and M.Sc. degrees from the University of Electronic Science and Technology of China, Chengdu, China, in 2015 and 2018, respectively, where he is currently pursuing the Ph.D. degree with the School of Computer Science and Engineering. His research interests include machine learning, especially transfer learning and subspace learning.
Xiantong Zhen (United Imaging Healthcare)
Jingjing Li (University of Electronic Science and Technology of China)
Cees Snoek (University of Amsterdam)
More from the Same Authors
- 2022 : Unlocking Slot Attention by Changing Optimal Transport Costs »
  Yan Zhang · David Zhang · Simon Lacoste-Julien · Gertjan Burghouts · Cees Snoek
- 2022 : Self-Guided Diffusion Model »
  TAO HU · David Zhang · Yuki Asano · Gertjan Burghouts · Cees Snoek
- 2022 : Meta-Learning Makes a Better Multimodal Few-shot Learner »
  Ivona Najdenkoska · Xiantong Zhen · Marcel Worring
- 2023 Poster: Learn to Categorize or Categorize to Learn? Self-Coding for Generalized Category Discovery »
  Sarah Rastegar · Hazel Doughty · Cees Snoek
- 2023 Poster: ProtoDiff: Learning to Learn Prototypical Networks by Task-Guided Diffusion »
  Yingjun Du · Zehao Xiao · Shengcai Liao · Cees Snoek
- 2023 Poster: Diffusion-Based Probabilistic Uncertainty Estimation for Active Domain Adaptation »
  Zhekai Du · Jingjing Li
- 2023 Poster: Episodic Multi-Task Learning with Heterogeneous Neural Processes »
  Jiayi Shen · Xiantong Zhen · Qi Wang · Marcel Worring
- 2023 Poster: Learning Unseen Modality Interaction »
  Yunhua Zhang · Hazel Doughty · Cees Snoek
- 2022 Poster: Association Graph Learning for Multi-Task Classification with Category Shifts »
  Jiayi Shen · Zehao Xiao · Xiantong Zhen · Cees Snoek · Marcel Worring
- 2021 Poster: Learning to Learn Dense Gaussian Processes for Few-Shot Learning »
  Ze Wang · Zichen Miao · Xiantong Zhen · Qiang Qiu
- 2021 Poster: Variational Multi-Task Learning with Gumbel-Softmax Priors »
  Jiayi Shen · Xiantong Zhen · Marcel Worring · Ling Shao
- 2020 Poster: Learning to Learn Variational Semantic Memory »
  Xiantong Zhen · Yingjun Du · Huan Xiong · Qiang Qiu · Cees Snoek · Ling Shao