
Modular Meta-Learning with Shrinkage
Yutian Chen · Abram Friesen · Feryal Behbahani · Arnaud Doucet · David Budden · Matthew Hoffman · Nando de Freitas

Wed Dec 09 09:00 PM -- 11:00 PM (PST) @ Poster Session 4 #1207

Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components. Updating only these task-specific modules then allows the model to be adapted to low-data tasks for as many steps as necessary without risking overfitting. Unfortunately, existing meta-learning methods either do not scale to long adaptation or else rely on handcrafted task-specific architectures. Here, we propose a meta-learning approach that obviates the need for this often sub-optimal hand-selection. In particular, we develop general techniques based on Bayesian shrinkage to automatically discover and learn both task-specific and general reusable modules. Empirically, we demonstrate that our method discovers a small set of meaningful task-specific modules and outperforms existing meta-learning approaches in domains like few-shot text-to-speech that have little task data and long adaptation horizons. We also show that existing meta-learning methods including MAML, iMAML, and Reptile emerge as special cases of our method.
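The core idea named in the abstract can be illustrated with a toy sketch (this is an illustration of the general shrinkage principle, not the paper's implementation): placing a Gaussian prior θ_task ~ N(θ_meta, σ²_m I) on each module m turns task adaptation into a regularized optimization, where a small learned σ²_m shrinks a module back to the shared meta-parameters (a general, reusable module) and a large σ²_m lets it adapt freely (a task-specific module). All names and the quadratic task loss below are hypothetical.

```python
import numpy as np

def adapt_module(theta_meta, grad_fn, sigma2, lr=0.01, steps=2000):
    """Gradient descent on a shrinkage-regularized task loss:
    task_loss(theta) + ||theta - theta_meta||^2 / (2 * sigma2).
    Small sigma2 pulls the module toward the meta-parameters."""
    theta = theta_meta.copy()
    for _ in range(steps):
        g = grad_fn(theta) + (theta - theta_meta) / sigma2
        theta = theta - lr * g
    return theta

# Hypothetical quadratic task loss 0.5 * ||theta - 1||^2 (optimum at 1).
theta_meta = np.zeros(3)
grad_fn = lambda th: th - 1.0

tight = adapt_module(theta_meta, grad_fn, sigma2=1e-2)  # strong shrinkage
loose = adapt_module(theta_meta, grad_fn, sigma2=1e2)   # weak shrinkage

# Strong shrinkage keeps the module near theta_meta (stays "general");
# weak shrinkage lets it move almost all the way to the task optimum.
print(tight, loose)
```

Under this view, meta-learning the σ²_m values is what "automatically discovers" which modules should be task-specific, and the special cases mentioned in the abstract correspond to particular choices of this regularization (e.g. iMAML also adapts under an explicit proximal penalty toward the meta-parameters).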

Author Information

Yutian Chen (DeepMind)
Abe Friesen (DeepMind)
Feryal Behbahani (DeepMind)
Arnaud Doucet (Google DeepMind)
David Budden (DeepMind)
Matthew Hoffman (DeepMind)
Nando de Freitas (DeepMind)
