

Poster in Workshop: Machine Learning for Engineering Modeling, Simulation and Design

Differentiable Implicit Layers

Andreas Look · Simona Doneva · Melih Kandemir · Rainer Gemulla · Jan Peters


Abstract:

In this paper, we introduce an efficient backpropagation scheme for unconstrained implicit functions. These functions are parametrized by a set of learnable weights and may optionally depend on some input, making them well suited as learnable layers in a neural network. We demonstrate our scheme on two applications: (i) neural ODEs with the implicit Euler method, and (ii) system identification in model predictive control.
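The paper's exact scheme is not reproduced here, but the core idea behind differentiating an implicit layer can be illustrated with the implicit function theorem: if the layer output z is defined by a root condition F(z, x, θ) = 0, then dz/dx = -(∂F/∂z)^{-1} ∂F/∂x (and likewise for the weights θ), so the backward pass needs only one linear solve instead of unrolling the forward solver. Below is a minimal sketch in JAX for a scalar toy problem; the residual F (one implicit Euler step of dz/dt = tanh(θz)), the Newton solver, and all names are illustrative assumptions, not the authors' implementation.

import jax
import jax.numpy as jnp

def F(z, x, theta):
    # Toy residual: one implicit Euler step of dz/dt = tanh(theta * z)
    # with step size h = 0.1 and previous state x. F(z, x, theta) = 0
    # implicitly defines the next state z. (Illustrative choice only.)
    h = 0.1
    return z - x - h * jnp.tanh(theta * z)

@jax.custom_vjp
def implicit_layer(x, theta):
    # Forward pass: solve F(z, x, theta) = 0 with a few Newton steps.
    z = x
    for _ in range(20):
        z = z - F(z, x, theta) / jax.grad(F, argnums=0)(z, x, theta)
    return z

def implicit_layer_fwd(x, theta):
    z = implicit_layer(x, theta)
    return z, (z, x, theta)

def implicit_layer_bwd(res, g):
    # Backward pass via the implicit function theorem:
    # dz/dx = -(dF/dz)^{-1} dF/dx (and likewise for theta), so we never
    # differentiate through the Newton iterations themselves.
    z, x, theta = res
    dF_dz = jax.grad(F, argnums=0)(z, x, theta)
    dF_dx = jax.grad(F, argnums=1)(z, x, theta)
    dF_dtheta = jax.grad(F, argnums=2)(z, x, theta)
    v = g / dF_dz  # scalar analogue of the linear solve against dF/dz
    return (-v * dF_dx, -v * dF_dtheta)

implicit_layer.defvjp(implicit_layer_fwd, implicit_layer_bwd)

# Usage: gradients with respect to both the input and the weight.
z = implicit_layer(0.5, 1.2)
dz_dx, dz_dtheta = jax.grad(implicit_layer, argnums=(0, 1))(0.5, 1.2)

For a vector-valued state the division by dF/dz would become a linear solve against the Jacobian ∂F/∂z, e.g. via jax.scipy.linalg.solve; the key point is that memory and compute for the backward pass are independent of the number of forward solver iterations.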
