Workshop: Meta-Learning

Jane Wang, Joaquin Vanschoren, Erin Grant, Jonathan Schwarz, Francesco Visin, Jeff Clune, Roberto Calandra

Fri, Dec 11th, 2020 @ 11:00 – 20:00 GMT
Abstract: Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to learn new tasks more efficiently, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers and policies over hand-crafted features, to learning representations over which classifiers and policies operate, and finally to learning algorithms that themselves acquire representations, classifiers, and policies.

Meta-learning methods are of substantial practical interest. For instance, they have been shown to yield new state-of-the-art automated machine learning algorithms and architectures, and have substantially improved few-shot learning systems. Moreover, the ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in cognitive science and reward learning in neuroscience.
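
To make the "learning to learn" loop described above concrete, below is a minimal sketch of a first-order MAML-style inner/outer update on a toy 1-D regression problem. The task distribution, linear model, and step sizes are invented purely for illustration and are not tied to any of the talks or papers in the program.

import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # A "task" is fitting y = a * x for a randomly drawn slope a.
    # The support set is used to adapt; the query set evaluates the adaptation.
    a = rng.uniform(-2.0, 2.0)
    x_s = rng.uniform(-1.0, 1.0, size=10)
    x_q = rng.uniform(-1.0, 1.0, size=10)
    return (x_s, a * x_s), (x_q, a * x_q)

def mse_grad(w, x, y):
    # Gradient of the mean squared error of the linear model y_hat = w * x.
    return np.mean(2.0 * (w * x - y) * x)

w_meta = 0.0                      # the meta-learned initialization
inner_lr, outer_lr = 0.1, 0.01    # step sizes chosen arbitrarily for this sketch

for step in range(2000):
    (x_s, y_s), (x_q, y_q) = sample_task()
    # Inner loop: adapt to the sampled task with one gradient step from w_meta.
    w_task = w_meta - inner_lr * mse_grad(w_meta, x_s, y_s)
    # Outer loop (first-order approximation): update the initialization so that
    # one-step adaptation performs well on the task's query set.
    w_meta = w_meta - outer_lr * mse_grad(w_task, x_q, y_q)

The same two-level structure, adapting on a support set, evaluating on a query set, and updating the meta-parameters, underlies many of the few-shot learning and meta-reinforcement-learning methods discussed in the program.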

Schedule

11:00 – 11:10 GMT
Introduction and opening remarks
11:10 – 11:11 GMT
Introduction for invited speaker, Frank Hutter
Jane Wang
11:11 – 11:36 GMT
Meta-learning neural architectures, initial weights, hyperparameters, and algorithm components
Frank Hutter
11:36 – 11:40 GMT
Q/A for invited talk #1
Frank Hutter
11:40 – 11:55 GMT
On episodes, Prototypical Networks, and few-shot learning
Steinar Laenen, Luca Bertinetto
12:00 – 13:00 GMT
Poster session #1
13:00 – 13:01 GMT
Introduction for invited speaker, Luisa Zintgraf
Francesco Visin
13:01 – 13:26 GMT
Exploration in meta-reinforcement learning
Luisa Zintgraf
13:26 – 13:30 GMT
Q/A for invited talk #2
Luisa Zintgraf
13:30 – 13:31 GMT
Introduction for invited speaker, Tim Hospedales
Jonathan Schwarz
13:31 – 13:56 GMT
Meta-Learning: Representations and Objectives
Timothy Hospedales
13:56 – 14:00 GMT
Q/A for invited talk #3
Timothy Hospedales
14:00 – 15:00 GMT
Break
15:00 – 16:00 GMT
Poster session #2
16:00 – 16:01 GMT
Introduction for invited speaker, Louis Kirsch
Joaquin Vanschoren
16:01 – 16:26 GMT
General meta-learning
Louis Kirsch
16:26 – 16:30 GMT
Q/A for invited talk #4
Louis Kirsch
16:30 – 16:31 GMT
Introduction for invited speaker, Fei-Fei Li
Erin Grant
16:31 – 16:56 GMT
Creating diverse tasks to catalyze robot learning
Li Fei-Fei
16:56 – 17:00 GMT
Q/A for invited talk #5
Li Fei-Fei
17:00 – 18:00 GMT
Poster session #3
18:00 – 18:01 GMT
Introduction for invited speaker, Kate Rakelly
Erin Grant
18:01 – 18:26 GMT
An inference perspective on meta-reinforcement learning
Kate Rakelly
18:26 – 18:30 GMT
Q/A for invited talk #6
Kate Rakelly
18:30 – 18:45 GMT
Reverse engineering learned optimizers reveals known and novel mechanisms
Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Sohl-Dickstein
18:45 – 19:00 GMT
Bayesian optimization by density ratio estimation
Louis Tiao, Aaron Klein, Cedric Archambeau, Edwin Bonilla, Matthias W Seeger, Fabio Ramos
19:00 – 20:00 GMT
Panel discussion

Accepted papers

Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms
Quentin Bouniot
Prototypical Region Proposal Networks for Few-shot Localization and Classification
Elliott Skomski
Defining Benchmarks for Continual Few-Shot Learning
Massimiliano Patacchiola
Decoupling Exploration and Exploitation in Meta-Reinforcement Learning without Sacrifices
Evan Liu
Is Support Set Diversity Necessary for Meta-Learning?
Oscar Li
MobileDets: Searching for Object Detection Architectures for Mobile Accelerators
Yunyang Xiong
Flexible Dataset Distillation: Learn Labels Instead of Images
Ondrej Bohdal
Continual Model-Based Reinforcement Learning with Hypernetworks
Yizhou Huang
Task Similarity Aware Meta Learning: Theory-inspired Improvement on MAML
Pan Zhou
Task Meta-Transfer from Limited Parallel Labels
Yiren Jian
Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search
Aditya Rawal
Contextual HyperNetworks for Novel Feature Adaptation
Angus Lamb
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Ferran Alet
MPLP: Learning a Message Passing Learning Protocol
Ettore Randazzo
Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory
Jonas Rothfuss
How Important is the Train-Validation Split in Meta-Learning?
Yu Bai
Meta-Learning Initializations for Image Segmentation
Sean Hendryx
Open-Set Incremental Learning via Bayesian Prototypical Embeddings
John Willes
Learning not to learn: Nature versus nurture in silico
Rob Lange
Prior-guided Bayesian Optimization
Artur Souza
TaskSet: A Dataset of Optimization Tasks
Luke Metz
Exploring Representation Learning for Flexible Few-Shot Tasks
Mengye Ren
Hyperparameter Transfer Across Developer Adjustments
Danny Stoll
Towards Meta-Algorithm Selection
Alexander Tornede
Continual learning with direction-constrained optimization
Yunfei Teng
Meta-Learning of Compositional Task Distributions in Humans and Machines
Sreejan Kumar
Learning to Generate Noise for Multi-Attack Robustness
Divyam Madaan
A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings
Davide Buffelli
Multi-Objective Multi-Fidelity Hyperparameter Optimization with Application to Fairness
Robin Schmucker
Measuring few-shot extrapolation with program induction
Ferran Alet
NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search
Julien Siems
Model-Agnostic Graph Regularization for Few-Shot Learning
Ethan Z Shen
Uniform Priors for Meta-Learning
Samarth Sinha
Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift
Marvin Zhang
Similarity of classification tasks
Cuong C Nguyen
HyperVAE: Variational Hyper-Encoding Network
Phuoc Nguyen
Meta-Learning via Hypernetworks
Dominic Zhao
Learning in Low Resource Modalities via Cross-Modal Generalization
Paul Pu Liang
Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training
Eleni Triantafillou
Few-shot Sequence Learning with Transformers
Lajanugen Logeswaran
Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads
Suneel Belkhale
Data Augmentation for Meta-Learning
Renkun Ni
Pareto-efficient Acquisition Functions for Cost-Aware Bayesian Optimization
Gauthier Guinet
Few-Shot Unsupervised Continual Learning through Meta-Examples
Alessia Bertugli
Meta-Learning Backpropagation And Improving It
Louis Kirsch
MAster of PuPpets: Model-Agnostic Meta-Learning via Pre-trained Parameters for Natural Language Generation
ChienFu Lin