Modeling the Machine Learning Multiverse

Samuel J. Bell · Onno Kampman · Jesse Dodge · Neil Lawrence

Hall J (level 1) #119

Keywords: [ transparency ] [ replication ] [ multiverse analysis ] [ generalization gap ] [ adaptive optimizers ] [ batch size ] [ reproducibility ]


Amid mounting concern about the reliability and credibility of machine learning research, we present a principled framework for making robust and generalizable claims: the multiverse analysis. Our framework builds upon the multiverse analysis introduced in response to psychology's own reproducibility crisis. To efficiently explore high-dimensional and often continuous ML search spaces, we model the multiverse with a Gaussian Process surrogate and apply Bayesian experimental design. Our framework is designed to facilitate drawing robust scientific conclusions about model performance, and thus our approach focuses on exploration rather than conventional optimization. In the first of two case studies, we investigate disputed claims about the relative merit of adaptive optimizers. Second, we synthesize conflicting research on the effect of learning rate on the large batch training generalization gap. For the machine learning community, the multiverse analysis is a simple and effective technique for identifying robust claims, for increasing transparency, and a step toward improved reproducibility.
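The exploration strategy described above — modeling the search space with a Gaussian Process surrogate and querying where the model is most uncertain, rather than where predicted performance is best — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the 2-D unit-square search space, the synthetic `evaluate` function (a stand-in for "train a model at this setting and record its test performance"), and the variance-maximizing acquisition rule are all assumptions chosen for brevity.

```python
# Sketch of exploration-driven surrogate modeling of a hyperparameter
# "multiverse". Assumptions: a normalized 2-D search space (e.g.
# learning rate x batch size) and a hypothetical evaluate() standing
# in for actually training and scoring a model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def evaluate(x):
    # Hypothetical stand-in for an expensive training run; here just
    # a smooth synthetic performance surface.
    return float(np.sin(3 * x[0]) * np.cos(2 * x[1]))

# Dense candidate grid over the (normalized) search space.
grid = np.array([[a, b] for a in np.linspace(0, 1, 25)
                        for b in np.linspace(0, 1, 25)])

# Seed the surrogate with a few randomly chosen settings.
X = grid[rng.choice(len(grid), size=5, replace=False)]
y = np.array([evaluate(x) for x in X])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
for _ in range(20):
    gp.fit(X, y)
    # Pure exploration: query the point where the surrogate's
    # posterior uncertainty is largest, rather than optimizing
    # predicted performance.
    _, std = gp.predict(grid, return_std=True)
    x_next = grid[int(np.argmax(std))]
    X = np.vstack([X, x_next])
    y = np.append(y, evaluate(x_next))

mean, std = gp.predict(grid, return_std=True)
print(f"max posterior std after {len(X)} evaluations: {std.max():.3f}")
```

After the loop, the surrogate's posterior mean approximates the whole performance surface, which is what supports claims about how performance varies across the space — in contrast to Bayesian optimization, which would concentrate samples near the optimum.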
