The program includes a wide variety of exciting competitions across different domains: some focus on applications, others on unifying fields, on technical challenges, or on directly tackling important real-world problems. The aim of this broad program is that anyone who wants to work on, or learn from, a competition can find something to their liking.
In this session, we have the following competitions:
* Learning By Doing: Controlling a Dynamical System using Control Theory, Reinforcement Learning, or Causality
* Reconnaissance Blind Chess
* Real Robot Challenge II
* The Billion-Scale Approximate Nearest Neighbor Search Challenge
* MetaDL: Few Shot Learning Competition with Novel Datasets from Practical Domains
Wed 2:00 a.m. - 2:05 a.m. | Introduction to Competition Day 2 (Intro)
Barbara Caputo
Wed 2:05 a.m. - 2:25 a.m. | Learning By Doing: Controlling a Dynamical System using Control Theory, Reinforcement Learning, or Causality + Q&A (Talk)
Control theory, reinforcement learning, and causality are all ways of mathematically describing how the world changes when we interact with it. Each field offers a different perspective with its own strengths and weaknesses. In this competition, we aim to bring together researchers from all three fields and to encourage cross-disciplinary discussion. The competition is constructed to fit readily into the mathematical frameworks of all three fields, and participants of any background are encouraged to take part. We designed two tracks in which participants must find controls/policies to interact optimally with a target dynamical system: an open-loop/bandit track and a closed-loop/online-RL track.
Sebastian Weichwald · Niklas Pfister · Dominik Baumann · Isabelle Guyon · Oliver Kroemer · Tabitha Lee · Søren Wengel Mogensen · Jonas Peters · Sebastian Trimpe
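To make the two tracks concrete, here is a minimal, self-contained sketch (plain Python/NumPy, not the competition API) contrasting an open-loop control sequence, fixed before the rollout, with a closed-loop policy that reacts to the observed state of a toy linear system. The system, gains, and function names are all illustrative assumptions.

```python
import numpy as np

# Toy scalar linear system x_{t+1} = a*x_t + b*u_t + noise; goal: reach target.
a, b, target = 0.9, 0.5, 1.0
rng = np.random.default_rng(0)

def step(x, u):
    return a * x + b * u + 0.01 * rng.normal()

T = 50

# Open loop / bandit track: commit to the whole control sequence up front.
open_loop = np.full(T, (1 - a) * target / b)  # steady-state input

# Closed loop / online RL track: pick u_t after observing x_t.
def closed_loop(x):
    # One-step "deadbeat" feedback: choose u so that a*x + b*u == target.
    return (target - a * x) / b

for name, controller in [("open loop", lambda x, t: open_loop[t]),
                         ("closed loop", lambda x, t: closed_loop(x))]:
    x, cost = 0.0, 0.0
    for t in range(T):
        x = step(x, controller(x, t))
        cost += (x - target) ** 2
    print(f"{name}: cumulative tracking cost = {cost:.3f}")
```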
Wed 2:24 a.m. - 5:24 a.m. | Breakout: Learning By Doing: Controlling a Dynamical System using Control Theory, Reinforcement Learning, or Causality (Breakout session)
Schedule (GMT Timezone)
Wed 2:25 a.m. - 2:45 a.m. | Reconnaissance Blind Chess + Q&A (Talk)
Reconnaissance Blind Chess is like chess, except that a player cannot, in general, see her opponent's pieces. Instead, each turn, each player chooses a 3x3 square of the board to observe privately. Algorithms used to create agents for previous games such as chess, Go, and poker break down in Reconnaissance Blind Chess for several reasons, including the imperfect information, the absence of obvious abstractions, and the lack of common knowledge. In addition to this NeurIPS competition, the game was recently made part of the new Hidden Information Games Competition (HIGC), organized with the AAAI Reinforcement Learning in Games workshop (2022). Build the best bot for this challenge of making strong decisions in multi-agent scenarios in the face of uncertainty.
Ryan Gardner · Gino Perrotta · Corey Lowman · Casey Richardson · Andrew Newman · Jared Markowitz · Nathan Drenkow · Bart Paulhamus · Ashley J Llorens · Todd Neller · Raman Arora · Bo Li · Mykel J Kochenderfer
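To give a flavor of the decision problem, below is a minimal bot skeleton built on the python-chess package. The `choose_sense`/`choose_move` structure mirrors the game's turn order (sense a 3x3 window, then move), but the class interface here is illustrative only, not the competition's official API, which ships with the starter code.

```python
import random
import chess  # python-chess

class RandomRbcBot:
    """Illustrative skeleton: each turn, sense a 3x3 window, then move."""

    def __init__(self, color):
        self.color = color
        self.board = chess.Board()  # our necessarily imperfect belief state

    def choose_sense(self):
        # A sense action is the center square of a 3x3 window; keep it off
        # the edges so the whole window lies on the board.
        centers = [chess.square(f, r) for f in range(1, 7) for r in range(1, 7)]
        return random.choice(centers)

    def handle_sense_result(self, sense_result):
        # sense_result: iterable of (square, piece or None) for the window.
        for square, piece in sense_result:
            self.board.set_piece_at(square, piece)

    def choose_move(self):
        # With imperfect information even "legal" is uncertain; pseudo-legal
        # moves on the belief board are a simple stand-in.
        moves = list(self.board.pseudo_legal_moves)
        return random.choice(moves) if moves else None
```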
Wed 2:44 a.m. - 5:44 a.m. | Breakout: Reconnaissance Blind Chess (Breakout session)
Wed 2:45 a.m. - 3:05 a.m. | Real Robot Challenge II + Q&A (Talk)
Despite recent successes of reinforcement learning (RL) in simulated environments, deploying or training algorithms in the real world remains a challenge due to the significant cost of experimentation and limited datasets. Since insights gained in simulation do not necessarily translate to real robots, we aim to close the gap between simulation and the real world by offering participants the opportunity to submit their algorithms to a robotics benchmark in the cloud. This allows teams to gather hundreds of hours of real-robot data with minimal effort, and submitting to our cloud benchmark is as easy as using a simulator. Simulators, easy-to-use interfaces, and large real-world datasets for pretraining are available. Show that your algorithm is practical by solving the tasks at different levels in the real world, and win prizes!
Stefan Bauer · Joel Akpo · Manuel Wuethrich · Nan Rosemary Ke · Anirudh Goyal · Thomas Steinbrenner · Felix Widmaier · Annika Buchholz · Bernhard Schölkopf · Dieter Büchler · Ludovic Righetti · Franziska Meier
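Conceptually, a submission wraps a policy inside a standard environment loop; gathering real-robot data through the cloud benchmark is meant to feel like running a simulator. The sketch below uses a made-up toy environment and observation keys to show the shape of such a loop; none of it is the challenge's actual interface.

```python
import numpy as np

class ToyPushEnv:
    """Stand-in simulator: push a point object toward a goal position."""

    def __init__(self, horizon=100):
        self.horizon = horizon
        self.rng = np.random.default_rng(0)

    def reset(self):
        self.obj = self.rng.uniform(-1.0, 1.0, size=3)
        self.goal = self.rng.uniform(-1.0, 1.0, size=3)
        self.t = 0
        return {"object_position": self.obj, "goal_position": self.goal}

    def step(self, action):
        self.obj = self.obj + 0.1 * np.clip(action, -1.0, 1.0)
        self.t += 1
        dist = float(np.linalg.norm(self.goal - self.obj))
        done = self.t >= self.horizon or dist < 0.01
        obs = {"object_position": self.obj, "goal_position": self.goal}
        return obs, -dist, done, {}

def policy(obs):
    # Toy proportional controller aiming the object at the goal.
    return 0.5 * (obs["goal_position"] - obs["object_position"])

env = ToyPushEnv()
obs, done, ret = env.reset(), False, 0.0
while not done:
    obs, reward, done, _ = env.step(policy(obs))
    ret += reward
print(f"episode return: {ret:.2f}")
```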
Wed 3:04 a.m. - 6:04 a.m. | Breakout: Real Robot Challenge II (Breakout session)
Wed 3:05 a.m. - 3:25 a.m. | Billion-Scale Approximate Nearest Neighbor Search Challenge + Q&A (Talk)
Approximate Nearest Neighbor Search (ANNS) amounts to finding nearby points to a given query point in a high-dimensional vector space. ANNS algorithms optimize a tradeoff between search speed, memory usage, and accuracy with respect to an exact sequential search. Thanks to efforts like ann-benchmarks.com, the state of the art for ANNS on million-scale datasets is quite clear. This competition aims to push the scale to out-of-memory billion-scale datasets and other hardware configurations that are realistic in many current applications. The competition uses six representative billion-scale datasets (many newly released for this competition) with their associated accuracy metrics. There are three tracks, depending on hardware settings: (T1) limited memory; (T2) limited main memory + SSD; (T3) any hardware configuration, including accelerators and custom silicon. We will use two recent indexing algorithms, DiskANN and FAISS, as baselines for tracks T1 and T2. The anticipated impact is an understanding of which ideas apply at billion-point scale, bridging the communities that work on ANNS problems, and a platform for newer researchers to contribute to and develop this relatively new research area. We will provide Azure cloud compute credit to participants who have promising ideas but lack the infrastructure necessary to develop their submissions.
Harsha Vardhan Simhadri · George Williams · Martin Aumüller · Artem Babenko · Dmitry Baranchuk · Qi Chen · Matthijs Douze · Ravishankar Krishnaswamy · Gopal Srinivasa · Suhas Jayaram Subramanya · Jingdong Wang
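The speed/accuracy tradeoff the tracks measure can be seen even at tiny scale with the FAISS baseline library: an IVF index searches only `nprobe` of its `nlist` clusters per query, so raising `nprobe` recovers recall at the cost of speed. Here is a minimal sketch on random data, with all sizes and parameters chosen arbitrarily for illustration.

```python
import numpy as np
import faiss  # pip install faiss-cpu

d, nb, nq, k = 64, 100_000, 100, 10
rng = np.random.default_rng(0)
xb = rng.random((nb, d), dtype=np.float32)  # database vectors
xq = rng.random((nq, d), dtype=np.float32)  # query vectors

# Exact brute-force search gives the ground-truth neighbors.
flat = faiss.IndexFlatL2(d)
flat.add(xb)
_, gt = flat.search(xq, k)

# IVF index: partition the database into nlist cells, search nprobe of them.
nlist = 1024
quantizer = faiss.IndexFlatL2(d)
ivf = faiss.IndexIVFFlat(quantizer, d, nlist)
ivf.train(xb)
ivf.add(xb)

for nprobe in (1, 8, 64):  # more cells probed -> slower but higher recall
    ivf.nprobe = nprobe
    _, approx = ivf.search(xq, k)
    recall = np.mean([len(set(a) & set(g)) / k for a, g in zip(approx, gt)])
    print(f"nprobe={nprobe:3d}  recall@{k} = {recall:.3f}")
```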
Wed 3:24 a.m. - 6:24 a.m. | Breakout: Billion-Scale Approximate Nearest Neighbor Search Challenge (Breakout session)
Schedule (GMT Timezone)
Wed 3:25 a.m. - 3:45 a.m. | MetaDL: Few Shot Learning Competition with Novel Datasets from Practical Domains + Q&A (Talk)
Meta-learning is an important machine learning paradigm that leverages experience from previous tasks to make better predictions on the task at hand. This competition focuses on supervised learning, and more particularly on "few-shot learning" classification settings, which aim at learning a good model from very few examples, typically 1 to 5 per class. A starting kit will be provided, consisting of a public dataset and various baseline implementations, including MAML (Finn et al., 2017) and Prototypical Networks (Snell et al., 2017). This should make it easy to get started and build upon the various resources in the field. The competition consists of novel datasets from various domains, including healthcare, ecology, biology, and chemistry. The competition will consist of three phases: a public phase, a feedback phase, and a final phase. The last two phases will be run with code submissions, fully blind-tested on the CodaLab challenge platform. A single (final) submission will be evaluated during the final phase, using five fresh datasets currently unknown to the meta-learning community.
Adrian El Baz · Isabelle Guyon · Zhengying Liu · Jan N. van Rijn · Haozhe Sun · Sébastien Treguer · Wei-Wei Tu · Ihsan Ullah · Joaquin Vanschoren · Phan Anh Vu
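As an example of the baseline family provided in the starting kit, here is the core of Prototypical Networks (Snell et al., 2017) in a few lines of NumPy: average the support embeddings of each class into a prototype, then classify queries by nearest prototype. The identity embedding is a stand-in for a trained feature extractor, and the toy episode data is made up.

```python
import numpy as np

def prototypical_predict(support_x, support_y, query_x, embed=lambda x: x):
    """Classify queries by distance to class prototypes (Snell et al., 2017).

    support_x: (n_support, d) few labeled examples, e.g. 1-5 per class
    support_y: (n_support,) integer class labels
    query_x:   (n_query, d) unlabeled examples to classify
    embed:     feature extractor; identity here, a trained network in practice
    """
    z_support = embed(support_x)
    z_query = embed(query_x)
    classes = np.unique(support_y)
    # One prototype per class: the mean embedding of its support examples.
    prototypes = np.stack([z_support[support_y == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance from each query to each prototype.
    dists = ((z_query[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return classes[dists.argmin(axis=1)]

# 2-way 2-shot toy episode.
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[0.1, 0.0], [1.0, 0.9]])
print(prototypical_predict(support_x, support_y, query_x))  # -> [0 1]
```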
Wed 3:44 a.m. - 6:44 a.m. | Breakout: MetaDL: Few Shot Learning Competition with Novel Datasets from Practical Domains (Breakout session)
Schedule (GMT Timezone)
DL 2.0: How Meta-Learning May Power the Next Generation of Deep Learning
Deep Learning (DL) has been incredibly successful due to its ability to automatically acquire useful representations from raw data through a joint optimization of all layers. However, current DL practice still requires substantial manual effort to define the right neural architecture and training hyperparameters for optimally learning these representations from the data at hand. The next logical step is to jointly optimize these components as well, based on a meta-level of learning and optimization. In this talk, I will discuss several advances towards this goal, focusing on (1) joint optimization of several meta-choices in the DL pipeline, (2) efficiency of this meta-optimization, and (3) optimization of uncertainty estimates and robustness to data shift.
MetaDelta++: Improve Generalization of Few-shot System Through Multi-Scale Pretrained Models and Improved Training Strategies
Meta-learning aims at learning quickly on novel tasks with limited data by transferring generic experience learned from previous tasks. Naturally, few-shot learning has been one of the most popular applications of meta-learning. Recently, the ensembled few-shot system MetaDelta was proposed, winning first place in the AAAI 2021 MetaDL challenge. However, the generalization ability of MetaDelta is still limited by its homogeneous model setting and weak pretraining and fine-tuning strategies, hindering it from being applied to more diverse scenarios and problems. We further boost the performance and generalization ability of MetaDelta by leveraging pre-trained models at multiple scales and improved training strategies, including semi-weakly supervised pretraining, data augmentation, separate learning rates per layer, lazier BN-statistics updates, and better decoder design. Our system, MetaDelta++, substantially improves performance and generalization, taking first place in phase 1 of the NeurIPS 2021 MetaDL challenge by a large margin over MetaDelta and other teams.
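One of the listed training strategies, separate learning rates per layer, is easy to express with PyTorch parameter groups. The sketch below illustrates the generic technique only; it is not MetaDelta++'s actual code, and the toy backbone and decay factor are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Toy backbone standing in for a pretrained feature extractor.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Layer-wise learning rates: earlier (more generic) layers take smaller steps,
# later (more task-specific) layers take larger ones.
base_lr, decay = 1e-3, 0.5
layers = [m for m in model if isinstance(m, nn.Linear)]
param_groups = [
    {"params": layer.parameters(), "lr": base_lr * decay ** (len(layers) - 1 - i)}
    for i, layer in enumerate(layers)
]
optimizer = torch.optim.Adam(param_groups)

for group in optimizer.param_groups:
    print(f"lr = {group['lr']:.2e}")
```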
In this slot, we will reflect on some of the latest developments in meta-learning. We will present several frameworks that capture the relations between various research directions in meta-learning and AutoML. More specifically, we will reflect on the role of meta-learning in the broader context of machine learning, and on the role of learning curves in AutoML.