Eleni Triantafillou Invited Talk
in
Workshop: UniReps: Unifying Representations in Neural Models

Reusing pretrained models for learning and unlearning tasks


Abstract:

Recent progress in deep learning has been driven by significantly increasing the size of models, making them in turn more data-hungry and expensive to train. In this era, designing techniques for effectively reusing pretrained models for new tasks is an increasingly important research direction. In this talk, I will argue that, aside from the well-studied problem of inserting new information into pretrained models or reusing them to learn new tasks (transfer learning, few-shot learning, domain adaptation, etc.), it is equally important, and significantly less studied, to design methods for removing information from trained models (e.g., to preserve privacy or to remove harmful biases and incorrect or outdated facts). I will touch upon recent work in both directions and discuss open research questions.