We propose SWA-Gaussian (SWAG), a simple, scalable, and general-purpose approach for uncertainty representation and calibration in deep learning. Stochastic Weight Averaging (SWA), which computes the first moment of stochastic gradient descent (SGD) iterates with a modified learning rate schedule, has recently been shown to improve generalization in deep learning. With SWAG, we fit a Gaussian using the SWA solution as the first moment and a low-rank plus diagonal covariance also derived from the SGD iterates, forming an approximate posterior distribution over neural network weights; we then sample from this Gaussian distribution to perform Bayesian model averaging. We empirically find that SWAG approximates the shape of the true posterior, in accordance with results describing the stationary distribution of SGD iterates. Moreover, we demonstrate that SWAG performs well on a wide variety of tasks, including out-of-sample detection, calibration, and transfer learning, in comparison to many popular alternatives including variational inference, MC dropout, KFAC Laplace, and temperature scaling.
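The procedure described in the abstract can be sketched in a few lines of NumPy: collect weight snapshots along the SGD trajectory, form the SWA mean and a low-rank plus diagonal covariance from them, and then draw weight samples for Bayesian model averaging. This is a minimal illustrative sketch, not the authors' released implementation; the function names, the toy data, and the choice to keep all `K` snapshots in the deviation matrix are assumptions for clarity.

```python
import numpy as np

def swag_moments(iterates):
    """Estimate SWAG moments from a sequence of SGD weight iterates.

    iterates: (K, d) array of flattened weight snapshots collected
    along the SGD trajectory (e.g., once per epoch late in training).
    """
    iterates = np.asarray(iterates, dtype=float)
    mean = iterates.mean(axis=0)                      # SWA solution (first moment)
    sq_mean = (iterates ** 2).mean(axis=0)
    diag = np.clip(sq_mean - mean ** 2, 1e-12, None)  # diagonal part of the covariance
    dev = iterates - mean                             # (K, d) deviation (low-rank) matrix
    return mean, diag, dev

def swag_sample(mean, diag, dev, rng):
    """Draw one weight sample from the SWAG Gaussian posterior approximation."""
    K, d = dev.shape
    z1 = rng.standard_normal(d)   # drives the diagonal component
    z2 = rng.standard_normal(K)   # drives the low-rank component
    return (mean
            + np.sqrt(diag) * z1 / np.sqrt(2.0)
            + dev.T @ z2 / np.sqrt(2.0 * (K - 1)))

rng = np.random.default_rng(0)
# Toy "trajectory": K=20 snapshots of a d=5 weight vector (hypothetical data).
iterates = 1.0 + 0.1 * rng.standard_normal((20, 5))
mean, diag, dev = swag_moments(iterates)
samples = np.stack([swag_sample(mean, diag, dev, rng) for _ in range(1000)])
# Bayesian model averaging then averages the network's predictions over
# these sampled weight vectors.
```

In practice the mean and second moment are maintained as running averages so the full set of iterates never needs to be stored, and only the last few deviation columns are kept for the low-rank term.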
Author Information
Wesley J Maddox (New York University)
Pavel Izmailov (New York University)
Timur Garipov (MIT CSAIL)
Dmitry Vetrov (Higher School of Economics, Samsung AI Center, Moscow)
Andrew Gordon Wilson (New York University)
More from the Same Authors
- 2020 Poster: Bayesian Deep Learning and a Probabilistic Perspective of Generalization
  Andrew Wilson · Pavel Izmailov
- 2020 Poster: On Power Laws in Deep Ensembles
  Ekaterina Lobacheva · Nadezhda Chirkova · Maxim Kodryan · Dmitry Vetrov
- 2020 Spotlight: On Power Laws in Deep Ensembles
  Ekaterina Lobacheva · Nadezhda Chirkova · Maxim Kodryan · Dmitry Vetrov
- 2020 Poster: Learning Invariances in Neural Networks from Training Data
  Gregory Benton · Marc Finzi · Pavel Izmailov · Andrew Wilson
- 2020 Poster: Why Normalizing Flows Fail to Detect Out-of-Distribution Data
  Polina Kirichenko · Pavel Izmailov · Andrew Wilson
- 2019 Poster: The Implicit Metropolis-Hastings Algorithm
  Kirill Neklyudov · Evgenii Egorov · Dmitry Vetrov
- 2019 Poster: Exact Gaussian Processes on a Million Data Points
  Ke Alexander Wang · Geoff Pleiss · Jacob Gardner · Stephen Tyree · Kilian Weinberger · Andrew Gordon Wilson
- 2019 Poster: Function-Space Distributions over Kernels
  Gregory Benton · Wesley J Maddox · Jayson Salkey · Julio Albinati · Andrew Gordon Wilson
- 2019 Poster: Importance Weighted Hierarchical Variational Inference
  Artem Sobolev · Dmitry Vetrov
- 2019 Poster: A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models
  Maxim Kuznetsov · Daniil Polykovskiy · Dmitry Vetrov · Alex Zhebrak
- 2018 Poster: Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs
  Timur Garipov · Pavel Izmailov · Dmitrii Podoprikhin · Dmitry Vetrov · Andrew Wilson
- 2018 Spotlight: Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs
  Timur Garipov · Pavel Izmailov · Dmitrii Podoprikhin · Dmitry Vetrov · Andrew Wilson
- 2017 Poster: Structured Bayesian Pruning via Log-Normal Multiplicative Noise
  Kirill Neklyudov · Dmitry Molchanov · Arsenii Ashukha · Dmitry Vetrov
- 2016 Poster: PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions
  Mikhail Figurnov · Aizhan Ibraimova · Dmitry Vetrov · Pushmeet Kohli
- 2015 Poster: M-Best-Diverse Labelings for Submodular Energies and Beyond
  Alexander Kirillov · Dmytro Shlezinger · Dmitry Vetrov · Carsten Rother · Bogdan Savchynskyy
- 2015 Poster: Tensorizing Neural Networks
  Alexander Novikov · Dmitrii Podoprikhin · Anton Osokin · Dmitry Vetrov