Recent years have seen rapid progress in metalearning methods, which use data to learn and optimize the performance of learning methods, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Metalearning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations and classifiers. The ability to improve one's own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience. The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of metalearning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in metalearning, as well as possible solutions.
Fri 9:00 a.m. - 9:10 a.m.
Opening Remarks
Fri 9:10 a.m. - 9:40 a.m.
Meta-learning as hierarchical modeling (Talk)
Erin Grant
Fri 9:40 a.m. - 10:10 a.m.
How Meta-Learning Could Help Us Accomplish Our Grandest AI Ambitions, and Early, Exotic Steps in that Direction (Talk)
A dominant trend in machine learning is that hand-designed pipelines are replaced by higher-performing learned pipelines once sufficient compute and data are available. I argue that this trend will apply to machine learning itself, and thus that the fastest path to truly powerful AI is to create AI-generating algorithms (AI-GAs) that on their own learn to solve the hardest AI problems. This paradigm is an all-in bet on meta-learning. To produce AI-GAs, we need work on three pillars: meta-learning architectures, meta-learning learning algorithms, and automatically generating environments. In this talk I will present recent work from our team in each of the three pillars. Pillar 1: Generative Teaching Networks (GTNs); Pillar 2: differentiable plasticity, differentiable neuromodulated plasticity ("backpropamine"), and a Neuromodulated Meta-Learning algorithm (ANML); Pillar 3: the Paired Open-Ended Trailblazer (POET). My goal is to motivate future research into each of the three pillars and their combination.
Jeff Clune
Fri 10:10 a.m. - 10:30 a.m.
Poster Spotlights 1 (Spotlight)
Fri 10:30 a.m. - 11:30 a.m.
Coffee/Poster session 1 (Poster Session)
Shiro Takagi, Khurram Javed, Johanna Sommer, Amr Sharaf, Pierluca D'Oro, Ying Wei, Sivan Doveh, Colin White, Santiago Gonzalez, Cuong Nguyen, Mao Li, Tianhe (Kevin) Yu, Tiago Ramalho, Masahiro Nomura, Ahsan Alvi, Jean-Francois Ton, W. Ronny Huang, Jessica Lee, Sebastian Flennerhag, Michael Zhang, Abe Friesen, Paul Blomstedt, Alina Dubatovka, Sergey Bartunov, Subin Yi, Iaroslav Shcherbatyi, Christian Simon, Zeyuan Shang, David MacLeod, Lu Liu, Liam Fowl, Diego Mesquita, Deirdre Quillen
Fri 11:30 a.m. - 12:00 p.m.
Interaction of Model-based RL and Meta-RL (Talk)
Pieter Abbeel
Fri 12:00 p.m. - 12:30 p.m.
Discussion 1 (Discussion Panel)
Fri 2:00 p.m. - 2:30 p.m.
Abstraction & Meta-Reinforcement Learning (Talk)
Reinforcement learning is hard in a fundamental sense: even in finite and deterministic environments, it can take a large number of samples to find a near-optimal policy. In this talk, I discuss the role that abstraction can play in achieving reliable yet efficient learning and planning. I first introduce classes of state abstraction that induce a trade-off between optimality and the size of an agent's resulting abstract model, yielding a practical algorithm for learning useful and compact representations from a demonstrator. Moreover, I show how these learned, simple representations can underlie efficient learning in complex environments. Second, I analyze the problem of searching for options that make planning more efficient. I present new computational complexity results that illustrate it is NP-hard to find the optimal options that minimize planning time, but show this set can be approximated in polynomial time. Collectively, these results provide a partial path toward abstractions that minimize the difficulty of high-quality learning and decision making.
Dave Abel
Fri 2:30 p.m. - 3:00 p.m.
Scalable Meta-Learning (Talk)
Raia Hadsell
Fri 3:00 p.m. - 3:20 p.m.
Poster Spotlights 2 (Spotlight)
Fri 3:20 p.m. - 4:30 p.m.
Coffee/Poster session 2 (Poster Session)
Xingyou Song, Puneet Mangla, David Salinas, Zhenxun Zhuang, Leo Feng, Shell Hu, Raul Puri, Wesley J Maddox, Aniruddh Raghu, Prudencio Tossou, Mingzhang Yin, Ishita Dasgupta, Kangwook Lee, Ferran Alet, Zhen Xu, Jörg Franke, James Harrison, Jonathan Warrell, Guneet S Dhillon, Arber Zela, Xin Qiu, Julien Niklas Siems, Russell Mendonca, Louis Schlessinger, Jeffrey Li, Georgiana Manolache, Debo Dutta, Lucas Glass, Abhishek Singh, Gregor Koehler
Fri 4:30 p.m. - 4:45 p.m.
Contributed Talk 1: Meta-Learning with Warped Gradient Descent (Sebastian Flennerhag) (Talk)
Fri 4:45 p.m. - 5:00 p.m.
Contributed Talk 2: MetaPix: Few-shot video retargeting (Jessica Lee) (Talk)
Fri 5:00 p.m. - 5:30 p.m.
Compositional generalization in minds and machines (Talk)
People learn in fast and flexible ways that elude the best artificial neural networks. Once a person learns how to "dax," they can effortlessly understand how to "dax twice" or "dax vigorously" thanks to their compositional skills. In this talk, we examine how people and machines generalize compositionally in language-like instruction learning tasks. Artificial neural networks have long been criticized for lacking systematic compositionality (Fodor & Pylyshyn, 1988; Marcus, 1998), but new architectures have been tackling increasingly ambitious language tasks. In light of these developments, we reevaluate these classic criticisms and find that artificial neural nets still fail spectacularly when systematic compositionality is required. We then show how people succeed in similar few-shot learning tasks and find they utilize three inductive biases that can be incorporated into models. Finally, we show how more structured neural nets can acquire compositional skills and human-like inductive biases through meta-learning.
Brenden Lake
Fri 5:30 p.m. - 5:50 p.m.
Discussion 2 (Discussion Panel)
Author Information
Roberto Calandra (Facebook AI Research)
Ignasi Clavera Gilaberte (UC Berkeley)
Frank Hutter (University of Freiburg & Bosch)
Frank Hutter is a Full Professor for Machine Learning at the Computer Science Department of the University of Freiburg (Germany), where he was previously an assistant professor from 2013 to 2017. Before that, he spent eight years at the University of British Columbia (UBC) for his PhD and postdoc. Frank's main research interests lie in machine learning, artificial intelligence, and automated algorithm design. For his 2009 PhD thesis on algorithm configuration, he received the CAIAC doctoral dissertation award for the best thesis in AI in Canada that year, and with his coauthors, he received several best paper awards and prizes in international competitions on machine learning, SAT solving, and AI planning. Since 2016 he has held an ERC Starting Grant for a project on automating deep learning based on Bayesian optimization, Bayesian neural networks, and deep reinforcement learning.
Joaquin Vanschoren (Eindhoven University of Technology, OpenML)
Joaquin Vanschoren is an Assistant Professor in Machine Learning at the Eindhoven University of Technology. He holds a PhD from the Katholieke Universiteit Leuven, Belgium. His research focuses on meta-learning and on understanding and automating machine learning. He founded and leads OpenML.org, a popular open science platform that facilitates the sharing and reuse of reproducible empirical machine learning data. He has received several demo and application awards and has been an invited speaker at ECDA, StatComp, IDA, AutoML@ICML, CiML@NIPS, AutoML@PRICAI, MLOSS@NIPS, and many other occasions, as well as a tutorial speaker at NIPS and ECMLPKDD. He was general chair at LION 2016, program chair of Discovery Science 2018, demo chair at ECMLPKDD 2013, and co-organized the AutoML and meta-learning workshop series at NIPS 2018, ICML 2016-2018, ECMLPKDD 2012-2015, and ECAI 2012-2014. He is also an editor of and contributor to the book 'Automatic Machine Learning: Methods, Systems, Challenges'.
Jane Wang (DeepMind)
Jane Wang is a research scientist at DeepMind on the neuroscience team, working on meta-reinforcement learning and neuroscience-inspired artificial agents. Her background is in physics, complex systems, and computational and cognitive neuroscience.
More from the Same Authors
- 2020 Workshop: 3rd Robot Learning Workshop
  Masha Itkina · Alex Bewley · Roberto Calandra · Igor Gilitschenski · Julien PEREZ · Ransalu Senanayake · Markus Wulfmeier · Vincent Vanhoucke
- 2020 Workshop: Meta-Learning
  Jane Wang · Joaquin Vanschoren · Erin Grant · Jonathan Schwarz · Francesco Visin · Jeff Clune · Roberto Calandra
- 2020 Poster: Re-Examining Linear Embeddings for High-Dimensional Bayesian Optimization
  Ben Letham · Roberto Calandra · Akshara Rai · Eytan Bakshy
- 2020 Poster: Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning
  Younggyo Seo · Kimin Lee · Ignasi Clavera Gilaberte · Thanard Kurutach · Jinwoo Shin · Pieter Abbeel
- 2020 Poster: 3D Shape Reconstruction from Vision and Touch
  Edward Smith · Roberto Calandra · Adriana Romero · Georgia Gkioxari · David Meger · Jitendra Malik · Michal Drozdzal
- 2020 Tutorial: (Track1) Where Neuroscience meets AI (And What's in Store for the Future)
  Jane Wang · Kevin Miller · Adam Marblestone
- 2019 Workshop: Robot Learning: Control and Interaction in the Real World
  Roberto Calandra · Markus Wulfmeier · Kate Rakelly · Sanket Kamthe · Danica Kragic · Stefan Schaal
- 2019 Poster: Meta-Surrogate Benchmarking for Hyperparameter Optimization
  Aaron Klein · Zhenwen Dai · Frank Hutter · Neil Lawrence · Javier González
- 2018 Workshop: NIPS 2018 Workshop on Meta-Learning
  Joaquin Vanschoren · Frank Hutter · Sachin Ravi · Jane Wang · Erin Grant
- 2018 Poster: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models
  Kurtland Chua · Roberto Calandra · Rowan McAllister · Sergey Levine
- 2018 Spotlight: Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models
  Kurtland Chua · Roberto Calandra · Rowan McAllister · Sergey Levine
- 2018 Poster: Maximizing acquisition functions for Bayesian optimization
  James Wilson · Frank Hutter · Marc Deisenroth (he/him)
- 2018 Tutorial: Automatic Machine Learning
  Frank Hutter · Joaquin Vanschoren
- 2017 Workshop: Workshop on Meta-Learning
  Roberto Calandra · Frank Hutter · Hugo Larochelle · Sergey Levine
- 2016 Workshop: Bayesian Optimization: Black-box Optimization and Beyond
  Roberto Calandra · Bobak Shahriari · Javier Gonzalez · Frank Hutter · Ryan Adams
- 2016 Poster: Bayesian Optimization with Robust Bayesian Neural Networks
  Jost Tobias Springenberg · Aaron Klein · Stefan Falkner · Frank Hutter
- 2016 Oral: Bayesian Optimization with Robust Bayesian Neural Networks
  Jost Tobias Springenberg · Aaron Klein · Stefan Falkner · Frank Hutter
- 2015 Workshop: Bayesian Optimization: Scalability and Flexibility
  Bobak Shahriari · Ryan Adams · Nando de Freitas · Amar Shah · Roberto Calandra
- 2015 Poster: Efficient and Robust Automated Machine Learning
  Matthias Feurer · Aaron Klein · Katharina Eggensperger · Jost Springenberg · Manuel Blum · Frank Hutter