Workshop on Meta-Learning
Roberto Calandra · Frank Hutter · Hugo Larochelle · Sergey Levine

Sat Dec 09 08:00 AM -- 06:30 PM (PST) @ Hyatt Beacon Ballroom D+E+F+H
Event URL: http://metalearning.ml/

Recent years have seen rapid progress in meta-learning methods, which learn to optimize the performance of learning methods based on data, generate new learning methods from scratch, and learn to transfer knowledge across tasks and domains. Meta-learning can be seen as the logical conclusion of the arc that machine learning has undergone in the last decade, from learning classifiers, to learning representations, and finally to learning algorithms that themselves acquire representations and classifiers. The ability to improve one’s own learning capabilities through experience can also be viewed as a hallmark of intelligent beings, and there are strong connections with work on human learning in neuroscience.

Meta-learning methods are also of substantial practical interest: they have been shown to yield new state-of-the-art automated machine learning methods, novel deep learning architectures, and substantially improved one-shot learning systems.

Some of the fundamental questions that this workshop aims to address are:
- How does the learning “task” of a meta-learner fundamentally differ from that of a traditional “non-meta” learner?
- Is there a practical limit to the number of meta-learning layers (e.g., would a meta-meta-meta-learning algorithm be of practical use)?
- How can we design more sample-efficient meta-learning methods?
- How can we exploit our domain knowledge to effectively guide the meta-learning process?
- What are the meta-learning processes in nature (e.g., in humans), and how can we take inspiration from them?
- Which ML approaches are best suited for meta-learning, in which circumstances, and why?
- What principles can we learn from meta-learning to help us design the next generation of learning systems?

The goal of this workshop is to bring together researchers from all the different communities and topics that fall under the umbrella of meta-learning. We expect that the presence of these different communities will result in a fruitful exchange of ideas and stimulate an open discussion about the current challenges in meta-learning, as well as possible solutions.

In terms of prospective participants, our main targets are machine learning researchers interested in the processes related to understanding and improving current meta-learning algorithms. Specific target communities within machine learning include, but are not limited to: meta-learning, optimization, deep learning, reinforcement learning, evolutionary computation, Bayesian optimization and AutoML. Our invited speakers also include researchers who study human learning, to provide a broad perspective to the attendees.

Sat 8:30 a.m. - 8:40 a.m.
Introduction and opening remarks (Introduction)
Roberto Calandra
Sat 8:40 a.m. - 9:10 a.m.
Learning to optimize with reinforcement learning (Talk)
Jitendra Malik
Sat 9:10 a.m. - 9:40 a.m.
Informing the Use of Hyperparameter Optimization Through Metalearning (Talk)
Christophe Giraud-Carrier
Sat 9:40 a.m. - 10:00 a.m.
Poster Spotlight (Spotlight)
Sat 10:00 a.m. - 11:00 a.m.
Poster session (and Coffee Break) (Poster Session)
Jacob Andreas, Kun Li, Conner Vercellino, Thomas Miconi, Wenpeng Zhang, Luca Franceschi, Zheng Xiong, Karim Ahmed, Laurent Itti, Tim Klinger, Mostafa Rohaninejad
Sat 11:00 a.m. - 11:30 a.m.
Invited talk: Jane Wang (Talk)
Sat 11:30 a.m. - 12:00 p.m.

Meta-learning holds the promise of enabling machine learning systems to replace manual engineering of hyperparameters and architectures, effectively reuse data across tasks, and quickly adapt to unexpected scenarios. In this talk, I will present a unified view of the meta-learning problem, discussing how a variety of approaches attempt to solve the problem, and when we might prefer some approaches over others. Further, I will discuss interesting theoretical and empirical properties of the model-agnostic meta-learning algorithm. Finally, I will conclude by showing new results on learning to learn from weak supervision with applications in imitation learning on a real robot and human-like concept acquisition.

Chelsea Finn
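
For readers unfamiliar with the model-agnostic meta-learning (MAML) algorithm discussed in the talk above, a minimal sketch of its inner/outer loop follows. This is an illustrative toy for few-shot sine-wave regression, not the speaker's implementation; it assumes PyTorch, and the network size, step sizes, and task distribution are arbitrary choices made for the example.

# Minimal MAML sketch for few-shot sine-wave regression -- an illustrative
# toy, not the speaker's implementation. Assumes PyTorch; network size,
# step sizes, and the task distribution are arbitrary choices.
import torch

def sample_task(k=10):
    """A task is a sine wave with random amplitude and phase; return
    (support, query) sets of k points each, drawn from the same wave."""
    amp = torch.rand(1) * 4.0 + 0.1
    phase = torch.rand(1) * 3.1416
    def draw():
        x = torch.rand(k, 1) * 10.0 - 5.0
        return x, amp * torch.sin(x + phase)
    return draw(), draw()

def net(params, x):
    """A tiny two-layer MLP, written functionally so it can be evaluated
    at the adapted (post-inner-step) parameters."""
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1 + b1) @ w2 + b2

# Meta-parameters: the initialization that MAML learns.
params = [(torch.randn(1, 40) * 0.5).requires_grad_(),
          torch.zeros(40, requires_grad=True),
          (torch.randn(40, 1) * 0.5).requires_grad_(),
          torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr, mse = 1e-2, torch.nn.functional.mse_loss

for step in range(2000):
    meta_loss = 0.0
    for _ in range(4):  # a meta-batch of tasks
        (x_s, y_s), (x_q, y_q) = sample_task()
        # Inner loop: one gradient step on the support set.
        # create_graph=True keeps the step differentiable, so the outer
        # update can backpropagate through the adaptation itself.
        grads = torch.autograd.grad(mse(net(params, x_s), y_s),
                                    params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer objective: query-set loss *after* adaptation.
        meta_loss = meta_loss + mse(net(adapted, x_q), y_q)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

The key design choice is create_graph=True in the inner step, which is what makes the outer update a second-order optimization over the initialization rather than ordinary multi-task training.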
Sat 1:30 p.m. - 2:00 p.m.
Learn to learn high-dimensional models from few examples (Talk)
Josh Tenenbaum
Sat 2:00 p.m. - 2:15 p.m.
Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start (Contributed Talk)
Sat 2:15 p.m. - 2:30 p.m.
Learning to Model the Tail (Contributed Talk)
Sat 2:30 p.m. - 3:30 p.m.
Poster session (and Coffee Break) (Poster Session)
Sat 3:30 p.m. - 4:00 p.m.

In this talk, I'll cover some recent work on few-shot learning that we did at DeepMind. I'll describe how the work on MANN and Matching Networks influenced our most recent work on few-shot learning for distributions, "Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions".

Oriol Vinyals
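
As background for the talk above, the core classification rule of Matching Networks can be sketched in a few lines: a query point is labeled by an attention-weighted vote over a small labeled support set. The sketch below is illustrative only; in the actual model the embeddings are learned networks, whereas here raw features stand in for them, an assumption made for brevity.

# Minimal sketch of the Matching Networks classification rule -- illustrative
# only. Raw features stand in for the learned embedding networks of the
# actual model, which is an assumption made for brevity.
import numpy as np

def matching_net_predict(support_x, support_y, query_x, n_classes):
    """Label each query as an attention-weighted vote over the support set:
    softmax over cosine similarities, then a weighted sum of one-hot labels."""
    def normalize(a):
        return a / np.linalg.norm(a, axis=-1, keepdims=True)
    sims = normalize(query_x) @ normalize(support_x).T  # (n_query, n_support)
    attn = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)
    one_hot = np.eye(n_classes)[support_y]              # (n_support, n_classes)
    return (attn @ one_hot).argmax(axis=1)

# Toy 2-way, 1-shot episode: queries drawn near the class-0 support point.
rng = np.random.default_rng(0)
support_x = np.array([[1.0, 0.0], [0.0, 1.0]])
support_y = np.array([0, 1])
query_x = rng.normal([0.9, 0.1], 0.1, size=(5, 2))
print(matching_net_predict(support_x, support_y, query_x, n_classes=2))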
Sat 4:00 p.m. - 5:00 p.m.
Panel Discussion

Author Information

Roberto Calandra (Facebook AI Research)
Frank Hutter (University of Freiburg)

Frank Hutter is a Full Professor for Machine Learning at the Computer Science Department of the University of Freiburg (Germany), where he was previously an assistant professor from 2013 to 2017. Before that, he spent eight years at the University of British Columbia (UBC) for his PhD and postdoc. Frank's main research interests lie in machine learning, artificial intelligence, and automated algorithm design. For his 2009 PhD thesis on algorithm configuration, he received the CAIAC doctoral dissertation award for the best thesis in AI in Canada that year, and, with his coauthors, he has received several best-paper awards and prizes in international competitions on machine learning, SAT solving, and AI planning. Since 2016, he has held an ERC Starting Grant for a project on automating deep learning based on Bayesian optimization, Bayesian neural networks, and deep reinforcement learning.

Hugo Larochelle (Twitter)
Sergey Levine (UC Berkeley)