

Breakout session in Competition: Competition Track Day 2: Overviews + Breakout Sessions

Breakout: MetaDL: Few Shot Learning Competition with Novel Datasets from Practical Domains


Abstract:

Schedule (times in GMT)

  • 12:00 - 12:30 GMT: Reveal of the MetaDL winners and datasets
  • 12:30 - 13:00 GMT: Keynote: Frank Hutter, University of Freiburg

DL 2.0: How Meta-Learning May Power the Next Generation of Deep Learning

Deep Learning (DL) has been incredibly successful, due to its ability to automatically acquire useful representations from raw data by a joint optimization process of all layers. However, current DL practice still requires substantial manual efforts to define the right neural architecture and training hyperparameters to optimally learn these representations for the data at hand. The next logical step is to jointly optimize these components as well, based on a meta-level of learning and optimization. In this talk, I will discuss several advances towards this goal, focusing on (1) joint optimization of several meta-choices in the DL pipeline, (2) efficiency of this meta-optimization, and (3) optimization of uncertainty estimates and robustness to data shift.
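To make the idea of joint meta-optimization concrete, here is a minimal sketch (not Hutter's method, and far simpler than the approaches in the talk): a random search over a joint space of architecture choices (number and width of hidden layers) and training hyperparameters (learning rate, L2 weight decay) for a small scikit-learn MLP. The search space, budget, and dataset are illustrative assumptions.

    import random
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    def sample_config(rng):
        # Jointly sample architecture (depth, layer widths) and training
        # hyperparameters (learning rate, weight decay) from one space.
        depth = rng.randint(1, 3)
        return {
            "hidden_layer_sizes": tuple(rng.choice([32, 64, 128]) for _ in range(depth)),
            "learning_rate_init": 10 ** rng.uniform(-4, -2),
            "alpha": 10 ** rng.uniform(-6, -2),  # L2 regularization strength
        }

    rng = random.Random(0)
    best = (0.0, None)
    for _ in range(10):  # tiny budget, purely illustrative
        cfg = sample_config(rng)
        model = MLPClassifier(max_iter=100, random_state=0, **cfg).fit(X_tr, y_tr)
        best = max(best, (model.score(X_val, y_val), cfg), key=lambda t: t[0])

    print("best validation accuracy %.3f with config %s" % best)

Methods like those discussed in the talk replace this blind sampling with model-based and multi-fidelity strategies, but the joint search space is the common starting point.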

  • 13:00 - 13:30 GMT: Invited talk by Team Meta Delta, Tsinghua University

MetaDelta++: Improve Generalization of Few-shot System Through Multi-Scale Pretrained Models and Improved Training Strategies

Meta-learning aims to learn quickly on novel tasks with limited data by transferring generic experience gained on previous tasks, and few-shot learning is naturally one of its most popular applications. Recently, the ensemble-based few-shot system MetaDelta was proposed and won first place in the AAAI 2021 MetaDL challenge. However, MetaDelta's generalization ability is still limited by its homogeneous model setting and its weak pretraining and fine-tuning strategies, which hinder its application to more diverse scenarios and problems. We further improve the performance and generalization ability of MetaDelta by leveraging pretrained models at multiple scales and improved training strategies, including semi-weakly supervised pretraining, data augmentation, separate learning rates for each layer, lazier batch-normalization (BN) statistics updates, and a better decoder design. Our system, MetaDelta++, substantially improves on MetaDelta's performance and generalization, and took first place in phase 1 of the NeurIPS 2021 MetaDL challenge by a large margin over the other teams.
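One of the listed training tricks, separate learning rates for each layer, can be sketched with PyTorch parameter groups. This is a hedged illustration, not the team's actual configuration: the ResNet-18 backbone, the geometric decay factor, and the optimizer settings are all assumptions.

    import torch
    from torchvision import models

    # An ImageNet-pretrained backbone (illustrative; MetaDelta++ ensembles
    # several pretrained models at multiple scales).
    backbone = models.resnet18(weights="IMAGENET1K_V1")

    # Give each top-level block its own learning rate: later, more
    # task-specific blocks get the full base LR, while earlier blocks get
    # geometrically smaller ones (the decay factor 0.5 is an assumption).
    base_lr, decay = 1e-3, 0.5
    blocks = list(backbone.children())
    param_groups = [
        {"params": b.parameters(), "lr": base_lr * decay ** (len(blocks) - 1 - i)}
        for i, b in enumerate(blocks)
    ]
    optimizer = torch.optim.SGD(param_groups, lr=base_lr, momentum=0.9)

The intuition is that the early layers of a pretrained network encode generic features worth preserving, while later layers need larger updates to adapt to the few-shot task.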

  • 13:30 - 14:30 GMT: Tutorial on Meta-learning, by Isabelle Guyon, Zhengying Liu, Felix Mohr, and Jan N. van Rijn

In this slot, we will reflect on some of the latest developments in meta-learning. We will present several frameworks that capture the relations between various research directions in meta-learning and AutoML. More specifically, we will reflect on the role of meta-learning in the broader context of machine learning, and on the role of learning curves in AutoML.
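As a toy illustration of learning curves driving AutoML decisions, the sketch below implements a successive-halving-style scheme: candidate configurations are trained a little, the worse half is discarded based on their partial learning curves, and the survivors receive a doubled budget. The synthetic curve model is an assumption, and the tutorial's actual content may differ.

    import numpy as np

    rng = np.random.default_rng(0)

    def partial_score(true_quality, budget):
        # Toy saturating learning curve plus noise; stands in for real
        # validation accuracy after `budget` units of training (assumption).
        return true_quality * (1 - np.exp(-budget / 8)) + rng.normal(0, 0.01)

    # 16 candidate configurations with hidden "true" asymptotic quality.
    configs = {i: rng.uniform(0.5, 0.95) for i in range(16)}
    survivors, budget = list(configs), 1
    while len(survivors) > 1:
        scores = {i: partial_score(configs[i], budget) for i in survivors}
        # Keep the better half; the rest are stopped early on curve evidence.
        survivors = sorted(survivors, key=scores.get, reverse=True)[: len(survivors) // 2]
        budget *= 2
    winner = survivors[0]
    print("selected config %d (true quality %.3f)" % (winner, configs[winner]))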

  • 14:30 - 15:00 GMT: Wrap up session, discussion on follow-up competition
