How Meta-Learning Could Help Us Accomplish Our Grandest AI Ambitions, and Early, Exotic Steps in that Direction
Workshop: Meta-Learning
Abstract
A dominant trend in machine learning is that hand-designed pipelines are replaced by higher-performing learned pipelines once sufficient compute and data are available. I argue that this trend will apply to machine learning itself, and thus that the fastest path to truly powerful AI is to create AI-generating algorithms (AI-GAs) that on their own learn to solve the hardest AI problems. This paradigm is an all-in bet on meta-learning. To produce AI-GAs, we need work on three pillars: meta-learning architectures, meta-learning learning algorithms, and automatically generating environments. In this talk I will present recent work from our team in each of the three pillars: Pillar 1: Generative Teaching Networks (GTNs); Pillar 2: differentiable plasticity, differentiable neuromodulated plasticity (“backpropamine”), and A Neuromodulated Meta-Learning algorithm (ANML); Pillar 3: the Paired Open-Ended Trailblazer (POET). My goal is to motivate future research into each of the three pillars and their combination.