Ensembling remains a widely used method for improving the performance of a given class of models. In deep learning, the benefits of ensembling are often attributed to the diverse predictions of the individual ensemble members. Here we investigate a tradeoff between diversity and individual model performance, and find that, surprisingly, encouraging diversity during training almost always yields worse ensembles. We show that this tradeoff arises from the Jensen gap between the single-model and ensemble losses, and that the Jensen gap is a natural measure of diversity for both the mean squared error and cross-entropy loss functions. Our results suggest that to reduce ensemble error, we should move away from efforts to increase predictive diversity and instead construct ensembles from less diverse (but more accurate) component models.
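To make the decomposition described in the abstract concrete, the sketch below illustrates the Jensen gap as the difference between the average single-model loss and the loss of the ensemble's averaged prediction; by Jensen's inequality this gap is non-negative for convex losses such as MSE and cross-entropy, and it is zero exactly when all members agree. This is a minimal illustration, not the authors' code: the function names, array shapes, and toy data are assumptions chosen for the example.

```python
import numpy as np

def jensen_gap_mse(preds, targets):
    """preds: (M, N) member predictions; targets: (N,) regression targets."""
    avg_member_loss = np.mean((preds - targets) ** 2)              # mean MSE of individual members
    ensemble_loss = np.mean((preds.mean(axis=0) - targets) ** 2)   # MSE of the averaged prediction
    return avg_member_loss - ensemble_loss                          # equals the mean variance across members

def jensen_gap_xent(probs, labels):
    """probs: (M, N, C) member class probabilities; labels: (N,) integer labels."""
    idx = np.arange(labels.shape[0])
    avg_member_loss = np.mean(-np.log(probs[:, idx, labels]))          # mean NLL of individual members
    ensemble_loss = np.mean(-np.log(probs.mean(axis=0)[idx, labels]))  # NLL of the averaged probabilities
    return avg_member_loss - ensemble_loss

# Toy regression ensemble: two members that disagree on the first example.
preds = np.array([[1.0, 2.0],
                  [3.0, 2.0]])
targets = np.array([2.0, 2.0])
print(jensen_gap_mse(preds, targets))  # 0.5: the mean variance of member predictions
```

In this toy example the ensemble mean predicts both targets exactly, so the entire average member error (0.5) is accounted for by the Jensen gap, i.e. by the members' disagreement.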
Author Information
Taiga Abe (Columbia University)
Estefany Kelly Buchanan (Columbia University)
Geoff Pleiss (Columbia University)
John Cunningham (Columbia University)
More from the Same Authors
- 2022 : Reliability benchmarks for image segmentation »
  Estefany Kelly Buchanan · Michael Dusenberry · Jie Ren · Kevin Murphy · Balaji Lakshminarayanan · Dustin Tran
- 2022 : Denoising Deep Generative Models »
  Gabriel Loaiza-Ganem · Brendan Ross · Luhuan Wu · John Cunningham · Jesse Cresswell · Anthony Caterini
- 2022 Workshop: The Symbiosis of Deep Learning and Differential Equations II »
  Michael Poli · Winnie Xu · Estefany Kelly Buchanan · Maryam Hosseini · Luca Celotti · Martin Magill · Ermal Rrapaj · Qiyao Wei · Stefano Massaroli · Patrick Kidger · Archis Joglekar · Animesh Garg · David Duvenaud
- 2022 Workshop: Gaussian Processes, Spatiotemporal Modeling, and Decision-making Systems »
  Alexander Terenin · Elizaveta Semenova · Geoff Pleiss · Zi Wang
- 2022 Poster: Data Augmentation for Compositional Data: Advancing Predictive Models of the Microbiome »
  Elliott Gordon-Rodriguez · Thomas Quinn · John Cunningham
- 2022 Poster: Posterior and Computational Uncertainty in Gaussian Processes »
  Jonathan Wenger · Geoff Pleiss · Marvin Pförtner · Philipp Hennig · John Cunningham
- 2022 Poster: Deep Ensembles Work, But Are They Necessary? »
  Taiga Abe · Estefany Kelly Buchanan · Geoff Pleiss · Richard Zemel · John Cunningham
- 2021 Workshop: The Symbiosis of Deep Learning and Differential Equations »
  Luca Celotti · Kelly Buchanan · Jorge Ortiz · Patrick Kidger · Stefano Massaroli · Michael Poli · Lily Hu · Ermal Rrapaj · Martin Magill · Thorsteinn Jonsson · Animesh Garg · Murtadha Aldeer
- 2021 Poster: The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective »
  Geoff Pleiss · John Cunningham
- 2021 Poster: Posterior Collapse and Latent Variable Non-identifiability »
  Yixin Wang · David Blei · John Cunningham
- 2021 Poster: Rectangular Flows for Manifold Learning »
  Anthony Caterini · Gabriel Loaiza-Ganem · Geoff Pleiss · John Cunningham
- 2020 Poster: Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking »
  Anqi Wu · Estefany Kelly Buchanan · Matthew Whiteway · Michael Schartner · Guido Meijer · Jean-Paul Noel · Erica Rodriguez · Claire Everett · Amy Norovich · Evan Schaffer · Neeli Mishra · C. Daniel Salzman · Dora Angelaki · Andrés Bendesky · The International Brain Laboratory · John Cunningham · Liam Paninski
- 2020 Poster: Recurrent Switching Dynamical Systems Models for Multiple Interacting Neural Populations »
  Joshua Glaser · Matthew Whiteway · John Cunningham · Liam Paninski · Scott Linderman
- 2020 Poster: Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax »
  Andres Potapczynski · Gabriel Loaiza-Ganem · John Cunningham
- 2019 Poster: Paraphrase Generation with Latent Bag of Words »
  Yao Fu · Yansong Feng · John Cunningham
- 2019 Poster: BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos »
  Eleanor Batty · Matthew Whiteway · Shreya Saxena · Dan Biderman · Taiga Abe · Simon Musall · Winthrop Gillis · Jeffrey Markowitz · Anne Churchland · John Cunningham · Sandeep R Datta · Scott Linderman · Liam Paninski
- 2019 Poster: Deep Random Splines for Point Process Intensity Estimation of Neural Population Data »
  Gabriel Loaiza-Ganem · Sean Perkins · Karen Schroeder · Mark Churchland · John Cunningham
- 2019 Poster: The continuous Bernoulli: fixing a pervasive error in variational autoencoders »
  Gabriel Loaiza-Ganem · John Cunningham
- 2017 : 3 spotlight presentations »
  Estefany Kelly Buchanan · Mathias Lechner · Kezhi Li
- 2016 Poster: Linear dynamical neural population models through nonlinear embeddings »
  Yuanjun Gao · Evan Archer · Liam Paninski · John Cunningham
- 2016 Poster: Automated scalable segmentation of neurons from multispectral images »
  Uygar Sümbül · Douglas Roossien · Dawen Cai · Fei Chen · Nicholas Barry · John Cunningham · Edward Boyden · Liam Paninski
- 2015 Poster: Bayesian Active Model Selection with an Application to Automated Audiometry »
  Jacob Gardner · Gustavo Malkomes · Roman Garnett · Kilian Weinberger · Dennis Barbour · John Cunningham
- 2015 Poster: High-dimensional neural spike train analysis with generalized count linear dynamical systems »
  Yuanjun Gao · Lars Busing · Krishna V Shenoy · John Cunningham
- 2015 Spotlight: High-dimensional neural spike train analysis with generalized count linear dynamical systems »
  Yuanjun Gao · Lars Busing · Krishna V Shenoy · John Cunningham