Normalizing flows are invertible neural networks with tractable change-of-volume terms, which allow optimization of their parameters to be efficiently performed via maximum likelihood. However, data of interest are typically assumed to live in some (often unknown) low-dimensional manifold embedded in a high-dimensional ambient space. The result is a modelling mismatch since -- by construction -- the invertibility requirement implies high-dimensional support of the learned distribution. Injective flows, mappings from low- to high-dimensional spaces, aim to fix this discrepancy by learning distributions on manifolds, but the resulting volume-change term becomes more challenging to evaluate. Current approaches either avoid computing this term entirely using various heuristics, or assume the manifold is known beforehand and therefore are not widely applicable. Instead, we propose two methods to tractably calculate the gradient of this term with respect to the parameters of the model, relying on careful use of automatic differentiation and techniques from numerical linear algebra. Both approaches perform end-to-end nonlinear manifold learning and density estimation for data projected onto this manifold. We study the trade-offs between our proposed methods, empirically verify that we outperform approaches ignoring the volume-change term by more accurately learning manifolds and the corresponding distributions on them, and show promising results on out-of-distribution detection. Our code is available at https://github.com/layer6ai-labs/rectangular-flows.
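The volume-change term the abstract refers to can be made concrete with a toy example. For an injective map g: R^d → R^D with Jacobian J, the term is 0.5·log det(JᵀJ). The sketch below is illustrative only (not the paper's implementation): the map `g`, and the helper names `jacobian` and `log_volume_term`, are hypothetical, and a central-difference Jacobian stands in for the automatic differentiation the paper actually uses.

```python
import numpy as np

def g(z):
    """Hypothetical injective map R^2 -> R^3: embed points onto a paraboloid."""
    return np.array([z[0], z[1], z[0] ** 2 + z[1] ** 2])

def jacobian(f, z, h=1e-5):
    """Numerical Jacobian via central differences (a stand-in for autodiff)."""
    d = len(z)
    cols = []
    for i in range(d):
        e = np.zeros(d)
        e[i] = h
        cols.append((f(z + e) - f(z - e)) / (2.0 * h))
    return np.stack(cols, axis=1)  # shape (D, d), D >= d

def log_volume_term(f, z):
    """The injective change-of-volume term: 0.5 * log det(J^T J)."""
    J = jacobian(f, z)
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return 0.5 * logdet

# At z = (1, 1), J^T J = I + 4 z z^T has determinant 9,
# so the term is approximately 0.5 * ln(9).
print(log_volume_term(g, np.array([1.0, 1.0])))
```

Because JᵀJ is d×d rather than D×D, the determinant stays well defined even though J itself is rectangular; making the gradient of this term tractable for neural-network Jacobians is what the proposed methods address.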
Author Information
Anthony Caterini (Layer 6 AI)
Gabriel Loaiza-Ganem (Layer 6 AI)
Geoff Pleiss (Columbia University)
John Cunningham (Columbia University)
More from the Same Authors
- 2021 : Entropic Issues in Likelihood-Based OOD Detection »
  Anthony Caterini · Gabriel Loaiza-Ganem
- 2022 : Relating Regularization and Generalization through the Intrinsic Dimension of Activations »
  Bradley Brown · Jordan Juravsky · Anthony Caterini · Gabriel Loaiza-Ganem
- 2022 : CaloMan: Fast generation of calorimeter showers with density estimation on learned manifolds »
  Jesse Cresswell · Brendan Ross · Gabriel Loaiza-Ganem · Humberto Reyes-Gonzalez · Marco Letizia · Anthony Caterini
- 2022 : The Union of Manifolds Hypothesis »
  Bradley Brown · Anthony Caterini · Brendan Ross · Jesse Cresswell · Gabriel Loaiza-Ganem
- 2022 : The Best Deep Ensembles Sacrifice Predictive Diversity »
  Taiga Abe · Estefany Kelly Buchanan · Geoff Pleiss · John Cunningham
- 2022 : Denoising Deep Generative Models »
  Gabriel Loaiza-Ganem · Brendan Ross · Luhuan Wu · John Cunningham · Jesse Cresswell · Anthony Caterini
- 2022 : Spotlight 5 - Gabriel Loaiza-Ganem: Denoising Deep Generative Models »
  Gabriel Loaiza-Ganem
- 2022 Poster: Data Augmentation for Compositional Data: Advancing Predictive Models of the Microbiome »
  Elliott Gordon-Rodriguez · Thomas Quinn · John Cunningham
- 2022 Poster: Posterior and Computational Uncertainty in Gaussian Processes »
  Jonathan Wenger · Geoff Pleiss · Marvin Pförtner · Philipp Hennig · John Cunningham
- 2022 Poster: Deep Ensembles Work, But Are They Necessary? »
  Taiga Abe · Estefany Kelly Buchanan · Geoff Pleiss · Richard Zemel · John Cunningham
- 2021 : Spotlight Talk 9 »
  Anthony Caterini
- 2021 Poster: The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective »
  Geoff Pleiss · John Cunningham
- 2021 Poster: Posterior Collapse and Latent Variable Non-identifiability »
  Yixin Wang · David Blei · John Cunningham
- 2020 Poster: Fast Matrix Square Roots with Applications to Gaussian Processes and Bayesian Optimization »
  Geoff Pleiss · Martin Jankowiak · David Eriksson · Anil Damle · Jacob Gardner
- 2020 Poster: Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking »
  Anqi Wu · Estefany Kelly Buchanan · Matthew Whiteway · Michael Schartner · Guido Meijer · Jean-Paul Noel · Erica Rodriguez · Claire Everett · Amy Norovich · Evan Schaffer · Neeli Mishra · C. Daniel Salzman · Dora Angelaki · Andrés Bendesky · The International Brain Laboratory · John Cunningham · Liam Paninski
- 2020 Poster: Recurrent Switching Dynamical Systems Models for Multiple Interacting Neural Populations »
  Joshua Glaser · Matthew Whiteway · John Cunningham · Liam Paninski · Scott Linderman
- 2020 Poster: Identifying Mislabeled Data using the Area Under the Margin Ranking »
  Geoff Pleiss · Tianyi Zhang · Ethan Elenberg · Kilian Weinberger
- 2020 Poster: Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax »
  Andres Potapczynski · Gabriel Loaiza-Ganem · John Cunningham
- 2019 Poster: Paraphrase Generation with Latent Bag of Words »
  Yao Fu · Yansong Feng · John Cunningham
- 2019 Poster: BehaveNet: nonlinear embedding and Bayesian neural decoding of behavioral videos »
  Eleanor Batty · Matthew Whiteway · Shreya Saxena · Dan Biderman · Taiga Abe · Simon Musall · Winthrop Gillis · Jeffrey Markowitz · Anne Churchland · John Cunningham · Sandeep R Datta · Scott Linderman · Liam Paninski
- 2019 Poster: Deep Random Splines for Point Process Intensity Estimation of Neural Population Data »
  Gabriel Loaiza-Ganem · Sean Perkins · Karen Schroeder · Mark Churchland · John Cunningham
- 2019 Poster: Exact Gaussian Processes on a Million Data Points »
  Ke Alexander Wang · Geoff Pleiss · Jacob Gardner · Stephen Tyree · Kilian Weinberger · Andrew Gordon Wilson
- 2019 Poster: The continuous Bernoulli: fixing a pervasive error in variational autoencoders »
  Gabriel Loaiza-Ganem · John Cunningham
- 2018 Poster: GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration »
  Jacob Gardner · Geoff Pleiss · Kilian Weinberger · David Bindel · Andrew Wilson
- 2018 Spotlight: GPyTorch: Blackbox Matrix-Matrix Gaussian Process Inference with GPU Acceleration »
  Jacob Gardner · Geoff Pleiss · Kilian Weinberger · David Bindel · Andrew Wilson
- 2018 Poster: Hamiltonian Variational Auto-Encoder »
  Anthony Caterini · Arnaud Doucet · Dino Sejdinovic
- 2017 Poster: On Fairness and Calibration »
  Geoff Pleiss · Manish Raghavan · Felix Wu · Jon Kleinberg · Kilian Weinberger
- 2016 Poster: Linear dynamical neural population models through nonlinear embeddings »
  Yuanjun Gao · Evan Archer · Liam Paninski · John Cunningham
- 2016 Poster: Automated scalable segmentation of neurons from multispectral images »
  Uygar Sümbül · Douglas Roossien · Dawen Cai · Fei Chen · Nicholas Barry · John Cunningham · Edward Boyden · Liam Paninski
- 2015 Poster: Bayesian Active Model Selection with an Application to Automated Audiometry »
  Jacob Gardner · Gustavo Malkomes · Roman Garnett · Kilian Weinberger · Dennis Barbour · John Cunningham
- 2015 Poster: High-dimensional neural spike train analysis with generalized count linear dynamical systems »
  Yuanjun Gao · Lars Busing · Krishna V Shenoy · John Cunningham
- 2015 Spotlight: High-dimensional neural spike train analysis with generalized count linear dynamical systems »
  Yuanjun Gao · Lars Busing · Krishna V Shenoy · John Cunningham