A Bayesian Perspective on Training Speed and Model Selection
Clare Lyle · Lisa Schut · Robin Ru · Yarin Gal · Mark van der Wilk

Thu Dec 10 09:00 AM -- 11:00 AM (PST) @ Poster Session 5 #1620

We take a Bayesian perspective to illustrate a connection between training speed and the marginal likelihood in linear models. This provides two major insights. First, a measure of a model's training speed can be used to estimate its marginal likelihood. Second, this measure, under certain conditions, predicts the relative weighting of models in linear model combinations trained to minimize a regression loss. We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks. We further provide encouraging empirical evidence that the intuition developed in these settings also holds for deep neural networks trained with stochastic gradient descent. Our results suggest a promising new direction towards explaining why neural networks trained with stochastic gradient descent are biased towards functions that generalize well.
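The connection in linear models can be illustrated with a small sketch (hypothetical code, not the authors' implementation): for Bayesian linear regression with a Gaussian prior and Gaussian noise, the chain rule factorizes the log marginal likelihood into a sum of one-step-ahead posterior predictive log-likelihoods. Each term scores how well the partially-trained model predicts the next data point, so the sum behaves like an area under a learning curve, i.e. a training-speed measure. The model dimensions, prior variance `alpha`, and noise variance `sigma2` below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# Model (assumed for illustration): y = X w + eps,
#   w ~ N(0, alpha * I), eps ~ N(0, sigma2 * I).
# Chain rule: log p(y) = sum_i log p(y_i | y_{<i}),
# a sum of sequential predictive log-likelihoods.
rng = np.random.default_rng(0)
n, d = 50, 3
alpha, sigma2 = 1.0, 0.1
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Direct evaluation: marginally, y ~ N(0, alpha * X X^T + sigma2 * I).
cov = alpha * X @ X.T + sigma2 * np.eye(n)
lml_direct = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)

# Sequential evaluation: update the Gaussian posterior one point at a
# time, scoring each point under the current posterior predictive.
S = np.eye(d) / alpha          # posterior precision (starts at prior)
b = np.zeros(d)                # precision-weighted posterior mean
lml_seq = 0.0
for xi, yi in zip(X, y):
    Sigma = np.linalg.inv(S)   # current posterior covariance
    mu = Sigma @ b             # current posterior mean
    pred_mean = xi @ mu
    pred_var = xi @ Sigma @ xi + sigma2
    lml_seq += norm(pred_mean, np.sqrt(pred_var)).logpdf(yi)
    S += np.outer(xi, xi) / sigma2   # Bayesian update with (xi, yi)
    b += yi * xi / sigma2

# The two evaluations agree: the marginal likelihood equals the summed
# one-step-ahead predictive performance ("training speed" in log-loss).
print(np.isclose(lml_direct, lml_seq))
```

Because both quantities are identical here, ranking linear models by this sequential predictive score is the same as ranking them by marginal likelihood, which is the sense in which faster-training models are preferred.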

Author Information

Clare Lyle (University of Oxford)
Lisa Schut (University of Oxford)
Robin Ru (Oxford University)
Yarin Gal (University of Oxford)
Mark van der Wilk (Imperial College)
