Poster
Self-Attention Between Datapoints: Going Beyond Individual Input-Output Pairs in Deep Learning
Jannik Kossen · Neil Band · Clare Lyle · Aidan Gomez · Thomas Rainforth · Yarin Gal

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

We challenge a common assumption underlying most supervised deep learning: that a model makes a prediction depending only on its parameters and the features of a single input. To this end, we introduce a general-purpose deep learning architecture that takes as input the entire dataset instead of processing one datapoint at a time. Our approach uses self-attention to reason about relationships between datapoints explicitly, which can be seen as realizing non-parametric models using parametric attention mechanisms. However, unlike conventional non-parametric models, we let the model learn end-to-end from the data how to make use of other datapoints for prediction. Empirically, our models solve cross-datapoint lookup and complex reasoning tasks unsolvable by traditional deep learning models. We show highly competitive results on tabular data, early results on CIFAR-10, and give insight into how the model makes use of the interactions between points.
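To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of attention between datapoints: the entire dataset is treated as a single sequence, so each datapoint's representation can attend to every other datapoint rather than being processed in isolation. All module names, shapes, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionBetweenDatapoints(nn.Module):
    """Hypothetical layer: self-attention over the datapoint axis."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (1, n_datapoints, d_model) -- the "batch" holds one whole
        # dataset, so attention mixes information across its rows.
        attended, _ = self.attn(x, x, x)
        return self.norm(x + attended)  # residual connection

# A toy dataset of 128 embedded datapoints. A conventional model would
# process each row independently; here every row attends to all others.
dataset = torch.randn(1, 128, 64)
layer = AttentionBetweenDatapoints()
out = layer(dataset)
print(out.shape)  # torch.Size([1, 128, 64])
```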

Author Information

Jannik Kossen (University of Oxford)
Neil Band (University of Oxford)
Clare Lyle (University of Oxford)
Aidan Gomez (University of Toronto)
Thomas Rainforth (University of Oxford)
Yarin Gal (University of Oxford)