

Poster in the Workshop on Machine Learning Safety

On the Abilities of Mathematical Extrapolation with Implicit Models

Juliette Decugis · Alicia Tsai · Ashwin Ganesh · Max Emerling · Laurent El Ghaoui


Abstract:

Deep neural networks excel on a variety of tasks, often surpassing human performance. However, when presented with out-of-distribution data, these models tend to break down even on the simplest tasks. In this paper, we compare the robustness of implicitly defined and classical deep learning models on a series of mathematical extrapolation tasks, where the models are tested on out-of-distribution samples at inference time. Across our experiments, implicit models greatly outperform classical deep networks, which overfit the training distribution. We present implicit models as a safer deep learning framework for generalization due to their flexible and selective structure. Implicit models, with potentially unlimited depth, not only adapt well to out-of-distribution data but also capture the underlying structure of the inputs far better than their classical counterparts.
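
For readers unfamiliar with implicit models, a minimal sketch of the core idea follows. This is an illustrative fixed-point layer in PyTorch, not the authors' implementation: the class name, dimensions, ReLU activation, and simple unrolled iteration are assumptions for illustration. Production implicit models typically differentiate through the equilibrium via the implicit function theorem rather than unrolling, and constrain W so the fixed point exists.

```python
import torch


class ImplicitLayer(torch.nn.Module):
    """Illustrative implicit (fixed-point) layer: find z with z = relu(W z + U x).

    Depth is not fixed in advance; the forward pass iterates until the
    state z converges, which is what gives implicit models their
    "potentially unlimited depth".
    """

    def __init__(self, n_state: int, n_input: int,
                 max_iter: int = 50, tol: float = 1e-4):
        super().__init__()
        self.W = torch.nn.Linear(n_state, n_state, bias=False)
        self.U = torch.nn.Linear(n_input, n_state, bias=False)
        self.max_iter = max_iter
        self.tol = tol

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Start from z = 0 and iterate the fixed-point equation to convergence.
        z = torch.zeros(x.shape[0], self.W.in_features, device=x.device)
        for _ in range(self.max_iter):
            z_next = torch.relu(self.W(z) + self.U(x))
            if (z_next - z).norm() < self.tol:
                break
            z = z_next
        return z


# Hypothetical usage: the equilibrium state z* plays the role of the
# hidden features a classical feed-forward stack would compute.
layer = ImplicitLayer(n_state=64, n_input=10)
x = torch.randn(32, 10)
z_star = layer(x)  # shape (32, 64)
```

Because the number of iterations adapts to the input rather than being hard-coded as a layer count, the same trained weights can, in principle, "think longer" on harder or out-of-distribution inputs, which is one intuition behind the extrapolation results reported here.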
