Poster

Learning Invariances using the Marginal Likelihood

Mark van der Wilk · Matthias Bauer · ST John · James Hensman

Room 210 #20

Keywords: [ Bayesian Theory ] [ Gaussian Processes ] [ Variational Inference ] [ Kernel Methods ]


Abstract:

In many supervised learning tasks, learning what changes do not affect the prediction target is as crucial to generalisation as learning what does. Data augmentation is a common way to enforce a model to exhibit an invariance: training data is modified according to an invariance designed by a human and added to the training data. We argue that invariances should be incorporated into the model structure, and learned using the marginal likelihood, which can correctly reward the reduced complexity of invariant models. We incorporate invariances in a Gaussian process, due to good marginal likelihood approximations being available for these models. Our main contribution is a derivation of a variational inference scheme for invariant Gaussian processes where the invariance is described by a probability distribution that can be sampled from, much like how data augmentation is implemented in practice.
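The core construction the abstract alludes to can be sketched in a few lines: instead of augmenting the training set, one builds an invariant kernel by averaging a base kernel over transformations sampled from the invariance distribution. The following is a minimal NumPy sketch, not the paper's implementation; the function names and the choice of random shifts as the transformation family are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(x, y, lengthscale=1.0):
    """Base squared-exponential kernel between two points."""
    return np.exp(-0.5 * np.sum((x - y) ** 2) / lengthscale**2)

def sample_augmentations(x, n_samples, max_shift=0.5):
    """Draw transformed copies of x from an invariance distribution.
    Random shifts here stand in for whatever transformation family
    (rotations, translations, ...) one wants the model invariant to."""
    shifts = rng.uniform(-max_shift, max_shift, size=(n_samples, x.shape[0]))
    return x + shifts

def invariant_kernel(x, y, n_samples=50):
    """Monte Carlo estimate of the invariant kernel
    k_inv(x, y) = E_{a, a'}[ k(a(x), a'(y)) ],
    averaging the base kernel over sampled transformations of both inputs."""
    xs = sample_augmentations(x, n_samples)
    ys = sample_augmentations(y, n_samples)
    vals = [rbf_kernel(xa, ya) for xa in xs for ya in ys]
    return float(np.mean(vals))
```

A GP equipped with `invariant_kernel` assigns similar function values to all transformed versions of an input, and the transformation distribution's parameters (here `max_shift`) are exactly the kind of hyperparameter the marginal likelihood can select.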
