Deep generative models trained by maximum likelihood remain very popular methods for reasoning about data probabilistically. However, it has been observed that they can assign higher likelihoods to out-of-distribution (OOD) data than in-distribution data, thus calling into question the meaning of these likelihood values. In this work we provide a novel perspective on this phenomenon, decomposing the average likelihood into a KL divergence term and an entropy term. We argue that the latter can explain the curious OOD behaviour mentioned above, suppressing likelihood values on datasets with higher entropy. Although our idea is simple, we have not seen it explored yet in the literature. This analysis provides further explanation for the success of OOD detection methods based on likelihood ratios, as the problematic entropy term cancels out in expectation. Finally, we discuss how this observation relates to recent success in OOD detection with manifold-supported models, for which the above decomposition does not hold.
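The decomposition described above can be illustrated numerically: when the model fits the data distribution perfectly, the KL term vanishes and the average log-likelihood equals the negative entropy of the data, so higher-entropy data receives lower likelihood even from a perfect model. A minimal sketch with Gaussians (an illustration under simplified assumptions, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_loglik_gaussian(sigma, n=100_000):
    """Average log-likelihood of a perfectly fit N(0, sigma^2) model
    on samples from that same distribution. Since KL(p* || p_model) = 0,
    this should approach minus the entropy of p*."""
    x = rng.normal(0.0, sigma, size=n)
    loglik = -0.5 * np.log(2 * np.pi * sigma**2) - x**2 / (2 * sigma**2)
    return loglik.mean()

def entropy_gaussian(sigma):
    # Differential entropy of N(0, sigma^2): 0.5 * log(2 * pi * e * sigma^2)
    return 0.5 * np.log(2 * np.pi * np.e * sigma**2)

# Higher-entropy data => lower average likelihood, even for a perfect model.
for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}: avg loglik ~ {avg_loglik_gaussian(sigma):.3f}, "
          f"-entropy = {-entropy_gaussian(sigma):.3f}")
```

This also hints at why likelihood ratios help: subtracting the average log-likelihood of a second model on the same data cancels the shared entropy term, leaving only a difference of KL divergences.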
Author Information
Anthony Caterini (Layer 6 AI)
Gabriel Loaiza-Ganem (Layer 6 AI)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 : Entropic Issues in Likelihood-Based OOD Detection
More from the Same Authors
- 2022 : Relating Regularization and Generalization through the Intrinsic Dimension of Activations
  Bradley Brown · Jordan Juravsky · Anthony Caterini · Gabriel Loaiza-Ganem
- 2022 : CaloMan: Fast generation of calorimeter showers with density estimation on learned manifolds
  Jesse Cresswell · Brendan Ross · Gabriel Loaiza-Ganem · Humberto Reyes-Gonzalez · Marco Letizia · Anthony Caterini
- 2022 : The Union of Manifolds Hypothesis
  Bradley Brown · Anthony Caterini · Brendan Ross · Jesse Cresswell · Gabriel Loaiza-Ganem
- 2022 : Denoising Deep Generative Models
  Gabriel Loaiza-Ganem · Brendan Ross · Luhuan Wu · John Cunningham · Jesse Cresswell · Anthony Caterini
- 2023 Poster: Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models
  George Stein · Jesse Cresswell · Rasa Hosseinzadeh · Yi Sui · Brendan Ross · Valentin Villecroze · Zhaoyan Liu · Anthony Caterini · Eric Taylor · Gabriel Loaiza-Ganem
- 2022 : Spotlight 5 - Gabriel Loaiza-Ganem: Denoising Deep Generative Models
  Gabriel Loaiza-Ganem
- 2021 : Spotlight Talk 9
  Anthony Caterini
- 2021 Poster: Rectangular Flows for Manifold Learning
  Anthony Caterini · Gabriel Loaiza-Ganem · Geoff Pleiss · John Cunningham
- 2020 Poster: Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax
  Andres Potapczynski · Gabriel Loaiza-Ganem · John Cunningham
- 2019 Poster: Deep Random Splines for Point Process Intensity Estimation of Neural Population Data
  Gabriel Loaiza-Ganem · Sean Perkins · Karen Schroeder · Mark Churchland · John Cunningham
- 2019 Poster: The continuous Bernoulli: fixing a pervasive error in variational autoencoders
  Gabriel Loaiza-Ganem · John Cunningham
- 2018 Poster: Hamiltonian Variational Auto-Encoder
  Anthony Caterini · Arnaud Doucet · Dino Sejdinovic