The manifold hypothesis states that high-dimensional data of interest lies on or near a low-dimensional manifold, and it is strongly supported by the success of deep learning on such data. However, we argue that the manifold hypothesis is incomplete, as it does not allow the intrinsic dimensionality to vary across different sub-regions of the data space. We thus posit the union of manifolds hypothesis, which states that high-dimensional data of interest comes from a union of disjoint manifolds; this allows intrinsic dimensionality to vary. We empirically verify this hypothesis on image datasets using a standard estimator of intrinsic dimensionality, and also demonstrate an improvement in classification performance derived from this hypothesis. We hope our work encourages the community to further explore the benefits of considering the union-of-manifolds structure in data.
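The abstract mentions verifying the hypothesis with "a standard estimator of intrinsic dimensionality" but does not name it here. A minimal sketch of one common choice, the Levina–Bickel maximum-likelihood estimator, applied separately to two synthetic "manifolds" of different intrinsic dimension embedded in the same ambient space (the estimator choice, function names, and synthetic data are all illustrative assumptions, not the paper's actual setup):

```python
# Sketch of the Levina-Bickel MLE intrinsic-dimension estimator.
# Assumption: this is one standard estimator; the paper does not
# specify which estimator it used in this abstract.
import numpy as np

def mle_intrinsic_dim(X, k=10):
    """MLE estimate of the intrinsic dimension of the points in X (n x D)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                         # exclude self-distances
    dists = np.sqrt(np.sort(d2, axis=1)[:, :k])          # k nearest-neighbour distances
    # Per-point inverse estimate: mean log-ratio of k-th to j-th NN distance.
    inv_m = np.log(dists[:, -1:] / dists[:, :-1]).mean(axis=1)
    return 1.0 / inv_m.mean()                            # average inverses, then invert

rng = np.random.default_rng(0)
# Two disjoint-in-structure components in the same ambient space R^10:
# A has intrinsic dimension 2, B has intrinsic dimension 5.
A = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 10))
B = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 10))
print(mle_intrinsic_dim(A), mle_intrinsic_dim(B))  # estimates near 2 and 5
```

Under the union-of-manifolds hypothesis, running such an estimator per component (e.g. per class on image data) should yield different intrinsic dimensions, whereas the classical manifold hypothesis would predict a single shared value.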
Author Information
Bradley Brown (University of Waterloo)
Anthony Caterini (Layer 6 AI / University of Oxford)
Brendan Ross (Layer 6 AI)
Jesse Cresswell (Layer 6 AI)
Gabriel Loaiza-Ganem (Layer 6 AI)
More from the Same Authors
- 2021: Entropic Issues in Likelihood-Based OOD Detection
  Anthony Caterini · Gabriel Loaiza-Ganem
- 2022: Relating Regularization and Generalization through the Intrinsic Dimension of Activations
  Bradley Brown · Jordan Juravsky · Anthony Caterini · Gabriel Loaiza-Ganem
- 2022: CaloMan: Fast generation of calorimeter showers with density estimation on learned manifolds
  Jesse Cresswell · Brendan Ross · Gabriel Loaiza-Ganem · Humberto Reyes-Gonzalez · Marco Letizia · Anthony Caterini
- 2022: Find Your Friends: Personalized Federated Learning with the Right Collaborators
  Yi Sui · Junfeng Wen · Yenson Lau · Brendan Ross · Jesse Cresswell
- 2022: Denoising Deep Generative Models
  Gabriel Loaiza-Ganem · Brendan Ross · Luhuan Wu · John Cunningham · Jesse Cresswell · Anthony Caterini
- 2022: Disparate Impact in Differential Privacy from Gradient Misalignment
  Maria Esipova · Atiyeh Ashari · Yaqiao Luo · Jesse Cresswell
- 2022: Poster Session 1
  Andrew Lowy · Thomas Bonnier · Yiling Xie · Guy Kornowski · Simon Schug · Seungyub Han · Nicolas Loizou · xinwei zhang · Laurent Condat · Tabea E. Röber · Si Yi Meng · Marco Mondelli · Runlong Zhou · Eshaan Nichani · Adrian Goldwaser · Rudrajit Das · Kayhan Behdin · Atish Agarwala · Mukul Gagrani · Gary Cheng · Tian Li · Haoran Sun · Hossein Taheri · Allen Liu · Siqi Zhang · Dmitrii Avdiukhin · Bradley Brown · Miaolan Xie · Junhyung Lyle Kim · Sharan Vaswani · Xinmeng Huang · Ganesh Ramachandra Kini · Angela Yuan · Weiqiang Zheng · Jiajin Li
- 2022: Spotlight 5 - Gabriel Loaiza-Ganem: Denoising Deep Generative Models
  Gabriel Loaiza-Ganem
- 2021 Poster: Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows
  Brendan Ross · Jesse Cresswell
- 2021 Poster: Rectangular Flows for Manifold Learning
  Anthony Caterini · Gabriel Loaiza-Ganem · Geoff Pleiss · John Cunningham
- 2020 Poster: Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax
  Andres Potapczynski · Gabriel Loaiza-Ganem · John Cunningham
- 2019 Poster: Deep Random Splines for Point Process Intensity Estimation of Neural Population Data
  Gabriel Loaiza-Ganem · Sean Perkins · Karen Schroeder · Mark Churchland · John Cunningham
- 2019 Poster: The continuous Bernoulli: fixing a pervasive error in variational autoencoders
  Gabriel Loaiza-Ganem · John Cunningham