Adaptive Gradient-Based Meta-Learning Methods
Mikhail Khodak · Maria-Florina Balcan · Ameet Talwalkar

Tue Dec 10 05:30 PM -- 07:30 PM (PST) @ East Exhibition Hall B + C #41

We build a theoretical framework for designing and understanding practical meta-learning methods that integrates sophisticated formalizations of task-similarity with the extensive literature on online convex optimization and sequential prediction algorithms. Our approach enables the task-similarity to be learned adaptively, provides sharper transfer-risk bounds in the setting of statistical learning-to-learn, and leads to straightforward derivations of average-case regret bounds for efficient algorithms in settings where the task-environment changes dynamically or the tasks share a certain geometric structure. We use our theory to modify several popular meta-learning algorithms and improve their meta-test-time performance on standard problems in few-shot learning and federated learning.
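The adaptive learning of task-similarity described above can be illustrated with a minimal sketch. The toy setup below is hypothetical (the quadratic tasks, constants, and variable names are illustrative assumptions, not taken from the paper): each task is a one-dimensional quadratic whose optimum is drawn near a shared center, and a Reptile-style meta-learner with a 1/t outer step size learns an initialization that adapts to that shared structure across tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy tasks: f_t(w) = 0.5 * (w - c_t)^2, with optima c_t
# clustered around 5.0 -- the "task similarity" the meta-learner should
# discover. All names and constants here are illustrative assumptions.
task_optima = 5.0 + 0.5 * rng.standard_normal(50)

phi = 0.0          # meta-initialization, learned across tasks
inner_lr = 0.1     # within-task gradient step size
inner_steps = 20   # within-task gradient steps

for t, c in enumerate(task_optima, start=1):
    # Inner loop: plain gradient descent on task t, starting from phi.
    w = phi
    for _ in range(inner_steps):
        w -= inner_lr * (w - c)   # gradient of 0.5 * (w - c)^2

    # Reptile-style meta-update with a 1/t step size, so phi tracks a
    # running average of the adapted parameters across tasks.
    phi += (w - phi) / t

print(phi)  # should land near the shared task center 5.0
```

With the 1/t outer step, the meta-initialization is the running mean of the per-task adapted parameters, so it moves toward the cluster of task optima; later tasks then start their inner loop much closer to their own solution, which is the few-shot benefit the abstract describes.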

Author Information

Mikhail Khodak (Carnegie Mellon University)
Maria-Florina Balcan (Carnegie Mellon University)
Ameet Talwalkar (Carnegie Mellon University)
