

Poster in Workshop: Meta-Learning

Meta-Learning Backpropagation And Improving It

Louis Kirsch


Abstract:

In the past, a large number of variable update rules have been proposed for meta-learning, such as fast weights, hypernetworks, learned learning rules, and meta recurrent neural networks. We unify these architectures by demonstrating that they share a single weight-sharing and sparsity principle that can be used to express complex learning algorithms. We propose a simple implementation of this principle, the Variable Shared Meta RNN, and demonstrate that it allows implementing neuronal dynamics and backpropagation solely by running the recurrent neural network in forward mode. This offers a biologically plausible direction for implementing backpropagation. We then show how backpropagation itself can be further improved through meta-learning: the human-engineered algorithm serves as an initialization from which better learning algorithms are meta-learned.
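To make the weight-sharing and sparsity principle concrete, below is a minimal sketch (not the authors' implementation): every scalar "weight" of a layer is replaced by a tiny RNN with a small state, all of these tiny RNNs share the same parameters, and neurons exchange small message vectors so that activations (and, in principle, error signals) can be propagated purely by running the system in forward mode. All names and dimensions (`S`, `M`, `N_IN`, `N_OUT`, `step`) are illustrative assumptions.

```python
# Hypothetical sketch of the weight-sharing / sparsity principle.
# Every connection (i, j) carries a tiny RNN state; all connections share
# the same RNN parameters. Messages between neurons are low-dimensional
# vectors, so learning signals can also be carried forward in these messages.

import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 4, 3   # layer size (illustrative)
S = 8                # per-connection state size (assumption)
M = 2                # message size exchanged between neurons (assumption)

# Shared parameters of the tiny RNN, reused at every (i, j) weight position.
W_state = rng.normal(scale=0.1, size=(S, S))
W_msg   = rng.normal(scale=0.1, size=(S, M))
W_out   = rng.normal(scale=0.1, size=(M, S))

# One small state per connection; this is where learned "weights" can live.
state = rng.normal(scale=0.1, size=(N_IN, N_OUT, S))

def step(state, in_msgs):
    """One forward-mode step: each connection updates its state from the
    message of its input neuron, then emits a message that is summed
    (sparsely) at its output neuron."""
    new_state = np.empty_like(state)
    out_msgs = np.zeros((N_OUT, M))
    for i in range(N_IN):
        for j in range(N_OUT):
            s = np.tanh(W_state @ state[i, j] + W_msg @ in_msgs[i])
            new_state[i, j] = s
            out_msgs[j] += W_out @ s
    return new_state, np.tanh(out_msgs)

in_msgs = rng.normal(size=(N_IN, M))   # e.g. input activations in one message slot
state, out_msgs = step(state, in_msgs)
print(out_msgs.shape)                  # (3, 2)
```

Because only the small shared RNN parameters define the update rule, meta-learning those parameters (for example, starting from a setting that emulates backpropagation) corresponds to searching over learning algorithms, which is the direction the abstract describes.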
