

Poster

Reconciling meta-learning and continual learning with online mixtures of tasks

Ghassen Jerfel · Erin Grant · Tom Griffiths · Katherine Heller

East Exhibition Hall B + C #175

Keywords: [ Bayesian Nonparametrics ] [ Probabilistic Methods ] [ Meta-Learning ] [ Algorithms ]


Abstract:

Learning-to-learn, or meta-learning, leverages data-driven inductive bias to increase the efficiency of learning on a novel task. This approach encounters difficulty when transfer is not advantageous, for instance, when tasks are considerably dissimilar or change over time. We use the connection between gradient-based meta-learning and hierarchical Bayes to propose a Dirichlet process mixture of hierarchical Bayesian models over the parameters of an arbitrary parametric model such as a neural network. In contrast to consolidating inductive biases into a single set of hyperparameters, our approach of task-dependent hyperparameter selection better handles latent distribution shift, as demonstrated on a set of evolving, image-based, few-shot learning benchmarks.
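To make the idea of task-dependent hyperparameter selection concrete, below is a minimal sketch (not the authors' code) of a hard-assignment approximation to a Dirichlet process mixture over meta-learned initializations. It uses a toy linear model with a MAML-style inner loop and a Reptile-style outer update; the function names, the `NEW_CLUSTER_COST` threshold, and the rule for spawning a new component are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative sketch: each mixture component holds one meta-learned initialization.
# A new task is assigned to the component whose adapted parameters fit it best, or
# spawns a new component if no existing one fits well enough (CRP-style "new cluster").

INNER_LR = 0.1          # step size for task-specific adaptation
OUTER_LR = 0.5          # step size for updating a component's initialization
INNER_STEPS = 5         # gradient steps of adaptation per task
NEW_CLUSTER_COST = 1.0  # loss threshold acting as the new-cluster penalty (assumed)

def task_loss(w, X, y):
    """Mean squared error of a linear model on one task's support set."""
    return np.mean((X @ w - y) ** 2)

def adapt(w0, X, y):
    """MAML-style inner loop: a few gradient steps from the initialization w0."""
    w = w0.copy()
    for _ in range(INNER_STEPS):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - INNER_LR * grad
    return w

def assign_and_update(components, X, y):
    """Pick the best-fitting component (or spawn a new one) and move its
    initialization toward the task-adapted solution."""
    losses = [task_loss(adapt(w0, X, y), X, y) for w0 in components]
    if not components or min(losses) > NEW_CLUSTER_COST:
        components.append(np.zeros(X.shape[1]))   # new mixture component
        k = len(components) - 1
    else:
        k = int(np.argmin(losses))
    w_adapted = adapt(components[k], X, y)
    components[k] += OUTER_LR * (w_adapted - components[k])  # Reptile-style outer update
    return k

# Toy stream of tasks drawn from two latent task distributions (latent shift over time).
rng = np.random.default_rng(0)
true_modes = [np.array([2.0, -1.0]), np.array([-3.0, 0.5])]
components = []
for t in range(40):
    w_true = true_modes[t % 2] + 0.1 * rng.normal(size=2)
    X = rng.normal(size=(20, 2))
    y = X @ w_true + 0.05 * rng.normal(size=20)
    assign_and_update(components, X, y)
print(f"discovered {len(components)} components for 2 latent task modes")
```

In this sketch, consolidating all tasks into a single initialization would average the two incompatible task modes, whereas the mixture keeps a separate initialization per discovered mode, which is the intuition behind handling dissimilar or shifting tasks with task-dependent hyperparameters.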
