Workshop: Meta-Learning

Jane Wang, Joaquin Vanschoren, Erin Grant, Jonathan Schwarz, Francesco Visin, Jeff Clune, Roberto Calandra

Friday, December 11, 2020, 03:00 - 12:00 PST (all times below are PST, UTC-8)
Abstract: Recent years have seen rapid progress in meta-learning methods, which transfer knowledge across tasks and domains to learn new tasks more efficiently, optimize the learning process itself, and even generate new learning methods from scratch. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers and policies over hand-crafted features, to learning representations over which classifiers and policies operate, and finally to learning algorithms that themselves acquire representations, classifiers, and policies.
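
To make the idea of "optimizing the learning process itself" concrete, below is a minimal sketch of a meta-learning loop in the MAML style, using its first-order approximation on toy 1-D linear-regression tasks. The toy setup and all names are illustrative assumptions, not any speaker's method: an initialization is meta-trained so that a single inner gradient step adapts it well to a new task.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_task():
        # A task is a random line y = a*x + b; return (support, query) sets.
        a, b = rng.uniform(-2.0, 2.0, size=2)
        def draw(n):
            x = rng.uniform(-1.0, 1.0, size=n)
            return x, a * x + b
        return draw(10), draw(10)

    def mse_grad(w, x, y):
        # Gradient of mean((w[0]*x + w[1] - y)**2) with respect to w.
        err = w[0] * x + w[1] - y
        return np.array([2 * np.mean(err * x), 2 * np.mean(err)])

    w = np.zeros(2)                  # meta-learned initialization
    inner_lr, outer_lr = 0.5, 0.02

    for _ in range(3000):
        (xs, ys), (xq, yq) = sample_task()
        w_adapted = w - inner_lr * mse_grad(w, xs, ys)   # inner loop: adapt to one task
        w = w - outer_lr * mse_grad(w_adapted, xq, yq)   # outer loop: improve the init

After meta-training, one gradient step on a handful of support points from an unseen task already fits that task well: the outer loop has learned how the inner learner should start.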

Meta-learning methods are of substantial practical interest. For instance, they have been shown to yield new state-of-the-art automated machine learning algorithms and architectures, and have substantially improved few-shot learning systems. Moreover, the ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in cognitive science and reward learning in neuroscience.
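
As one concrete example of the few-shot systems mentioned above, the sketch below follows the episodic setup popularized by Prototypical Networks (Snell et al., 2017): a query point is classified by its distance to per-class prototypes computed from a small labeled support set. The 2-D Gaussian data and the identity "embedding" are illustrative assumptions; in practice the embedding is a trained network.

    import numpy as np

    rng = np.random.default_rng(1)

    def embed(x):
        # Stand-in for a learned embedding network; identity here for brevity.
        return x

    # One 3-way, 5-shot episode: three classes, five support points each,
    # drawn from Gaussian clusters in the plane.
    class_means = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
    support = np.stack([rng.normal(mu, 0.5, size=(5, 2)) for mu in class_means])
    query = rng.normal(class_means[1], 0.5, size=2)      # ground-truth class: 1

    prototypes = embed(support).mean(axis=1)             # class prototype = mean embedding
    d2 = ((embed(query) - prototypes) ** 2).sum(axis=1)  # squared distance to each prototype
    probs = np.exp(-d2) / np.exp(-d2).sum()              # softmax over negative distances
    print("predicted class:", int(np.argmax(probs)))     # expected: 1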

Schedule

03:00 - 03:10
Introduction and opening remarks
03:10 - 03:11
Introduction for invited speaker, Frank Hutter
Jane Wang
03:11 - 03:36
Meta-learning neural architectures, initial weights, hyperparameters, and algorithm components
Frank Hutter
03:36 - 03:40
Q/A for invited talk #1
Frank Hutter
03:40 - 03:55
On episodes, Prototypical Networks, and few-shot learning
Steinar Laenen, Luca Bertinetto
04:00 - 05:00
Poster session #1
05:00 - 05:01
Introduction for invited speaker, Luisa Zintgraf
Francesco Visin
05:01 - 05:26
Exploration in meta-reinforcement learning
Luisa Zintgraf
05:26 - 05:30
Q/A for invited talk #2
Luisa Zintgraf
05:30 - 05:31
Introduction for invited speaker, Tim Hospedales
Jonathan Schwarz
05:31 - 05:56
Meta-Learning: Representations and Objectives
Timothy Hospedales
05:56 - 06:00
Q/A for invited talk #3
Timothy Hospedales
06:00 - 07:00
Break
07:00 - 08:00
Poster session #2
08:00 - 08:01
Introduction for invited speaker, Louis Kirsch
Joaquin Vanschoren
08:01 - 08:26
General meta-learning
Louis Kirsch
08:26 - 08:30
Q/A for invited talk #4
Louis Kirsch
08:30 - 08:31
Introduction for invited speaker, Fei-Fei Li
Erin Grant
08:31 - 08:56
Creating diverse tasks to catalyze robot learning
Li Fei-Fei
08:56 - 09:00
Q/A for invited talk #5
Li Fei-Fei
09:00 - 10:00
Poster session #3
10:00 - 10:01
Introduction for invited speaker, Kate Rakelly
Erin Grant
10:01 - 10:26
An inference perspective on meta-reinforcement learning
Kate Rakelly
10:26 - 10:30
Q/A for invited talk #6
Kate Rakelly
10:30 - 10:45
Reverse engineering learned optimizers reveals known and novel mechanisms
Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Sohl-Dickstein
10:45 - 11:00
Bayesian optimization by density ratio estimation
Louis Tiao, Aaron Klein, Cedric Archambeau, Edwin Bonilla, Matthias W Seeger, Fabio Ramos
11:00 - 12:00
Panel discussion

Accepted Papers

Putting Theory to Work: From Learning Bounds to Meta-Learning Algorithms
Quentin Bouniot
Prototypical Region Proposal Networks for Few-shot Localization and Classification
Elliott Skomski
Defining Benchmarks for Continual Few-Shot Learning
Massimiliano Patacchiola
Decoupling Exploration and Exploitation in Meta-Reinforcement Learning without Sacrifices
Evan Liu
Is Support Set Diversity Necessary for Meta-Learning?
Oscar Li
MobileDets: Searching for Object Detection Architecture for Mobile Accelerators
Yunyang Xiong
Flexible Dataset Distillation: Learn Labels Instead of Images
Ondrej Bohdal
Continual Model-Based Reinforcement Learning with Hypernetworks
Yizhou Huang
Task Similarity Aware Meta Learning: Theory-inspired Improvement on MAML
Pan Zhou
Task Meta-Transfer from Limited Parallel Labels
Yiren Jian
Synthetic Petri Dish: A Novel Surrogate Model for Rapid Architecture Search
Aditya Rawal
Contextual HyperNetworks for Novel Feature Adaptation
Angus Lamb
Tailoring: encoding inductive biases by optimizing unsupervised objectives at prediction time
Ferran Alet
MPLP: Learning a Message Passing Learning Protocol
Ettore Randazzo
Meta-Learning Bayesian Neural Network Priors Based on PAC-Bayesian Theory
Jonas Rothfuss
How Important is the Train-Validation Split in Meta-Learning?
Yu Bai
Meta-Learning Initializations for Image Segmentation
Sean Hendryx
Open-Set Incremental Learning via Bayesian Prototypical Embeddings
John Willes
Learning not to learn: Nature versus nurture in silico
Rob Lange
Prior-guided Bayesian Optimization
Artur Souza
TaskSet: A Dataset of Optimization Tasks
Luke Metz
Exploring Representation Learning for Flexible Few-Shot Tasks
Mengye Ren
Hyperparameter Transfer Across Developer Adjustments
Danny Stoll
Towards Meta-Algorithm Selection
Alexander Tornede
Continual learning with direction-constrained optimization
Yunfei Teng
Meta-Learning of Compositional Task Distributions in Humans and Machines
Sreejan Kumar
Learning to Generate Noise for Multi-Attack Robustness
Divyam Madaan
A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings
Davide Buffelli
Multi-Objective Multi-Fidelity Hyperparameter Optimization with Application to Fairness
Robin Schmucker
Measuring few-shot extrapolation with program induction
Ferran Alet
NAS-Bench-301 and the Case for Surrogate Benchmarks for Neural Architecture Search
Julien Siems
Model-Agnostic Graph Regularization for Few-Shot Learning
Ethan Z Shen
Uniform Priors for Meta-Learning
Samarth Sinha
Adaptive Risk Minimization: A Meta-Learning Approach for Tackling Group Shift
Marvin Zhang
Similarity of classification tasks
Cuong C Nguyen
HyperVAE: Variational Hyper-Encoding Network
Phuoc Nguyen
Meta-Learning via Hypernetworks
Dominic Zhao
Learning in Low Resource Modalities via Cross-Modal Generalization
Paul Pu Liang
Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training
Eleni Triantafillou
Few-shot Sequence Learning with Transformers
Lajanugen Logeswaran
Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads
Suneel Belkhale
Data Augmentation for Meta-Learning
Renkun Ni
Pareto-efficient Acquisition Functions for Cost-Aware Bayesian Optimization
Gauthier Guinet
Few-Shot Unsupervised Continual Learning through Meta-Examples
Alessia Bertugli
Meta-Learning Backpropagation And Improving It
Louis Kirsch
MAster of PuPpets: Model-Agnostic Meta-Learning via Pre-trained Parameters for Natural Language Generation
ChienFu Lin