Poster

Lifelong Domain Adaptation via Consolidated Internal Distribution

Mohammad Rostami

Virtual

Keywords: [ Continual Learning ] [ Domain Adaptation ] [ Machine Learning ]


Abstract:

We develop an algorithm to address unsupervised domain adaptation (UDA) in continual learning (CL) settings. The goal is to update a model continually so that it learns distributional shifts across sequentially arriving tasks with unlabeled data, while retaining knowledge about previously learned tasks. Existing UDA algorithms address the challenge of domain shift, but they require simultaneous access to the datasets of both the source and the target domains. Existing CL works, on the other hand, can only handle tasks with labeled data. Our solution consolidates the learned internal distribution to improve model generalization on new domains and uses experience replay to overcome catastrophic forgetting.
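The abstract only sketches the approach, so below is a minimal, hypothetical PyTorch illustration of the two ideas it names: modeling the learned internal (latent) distribution so it can be consolidated across domains, and replaying samples from that distribution to counter catastrophic forgetting while aligning a new unlabeled domain to it. All names (Encoder, fit_internal_gmm, swd, NUM_CLASSES, ...) are illustrative, and the specific choices of a per-class Gaussian model and sliced Wasserstein alignment are assumptions for this sketch, not necessarily the paper's exact formulation.

```python
# Hypothetical sketch, NOT the paper's reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, LATENT_DIM = 10, 64

class Encoder(nn.Module):
    """Toy feature extractor; the embedding space is where the
    internal distribution lives."""
    def __init__(self, in_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT_DIM))
    def forward(self, x):
        return self.net(x)

encoder = Encoder()
classifier = nn.Linear(LATENT_DIM, NUM_CLASSES)

def fit_internal_gmm(latents, labels):
    """Consolidate the internal distribution as one Gaussian per class
    (empirical mean/variance); assumes >= 2 samples per class."""
    stats = []
    for c in range(NUM_CLASSES):
        z = latents[labels == c].detach()
        stats.append((z.mean(0), z.var(0) + 1e-4))
    return stats

def sample_replay(stats, n):
    """Draw n labeled pseudo-samples from the internal distribution,
    used for experience replay instead of stored source data."""
    ys = torch.randint(0, NUM_CLASSES, (n,))
    mu = torch.stack([stats[c][0] for c in ys.tolist()])
    sd = torch.stack([stats[c][1].sqrt() for c in ys.tolist()])
    return mu + sd * torch.randn(n, LATENT_DIM), ys

def swd(a, b, n_proj=50):
    """Sliced Wasserstein distance between two equal-size latent batches;
    one common alignment metric, assumed here for illustration."""
    proj = F.normalize(torch.randn(a.size(1), n_proj), dim=0)
    pa, pb = (a @ proj).sort(0).values, (b @ proj).sort(0).values
    return ((pa - pb) ** 2).mean()

def adapt_step(x_target, stats, optimizer):
    """One update on an unlabeled target batch: pull target embeddings
    toward the consolidated distribution, and replay pseudo-samples
    through the classifier so past tasks are not forgotten."""
    z_t = encoder(x_target)
    z_r, y_r = sample_replay(stats, x_target.size(0))
    loss = swd(z_t, z_r) + F.cross_entropy(classifier(z_r), y_r)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

In this reading, `fit_internal_gmm` would be run once on the labeled source domain (and re-consolidated after each adaptation), while `adapt_step` is repeated over batches of each new unlabeled domain, so no raw data from earlier tasks needs to be retained.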