Precision measurements and new physics searches at the Large Hadron Collider require efficient simulations of particle propagation and interactions within the detectors. The most computationally expensive simulations involve calorimeter showers. Advances in deep generative modelling -- particularly in the realm of high-dimensional data -- have opened the possibility of generating realistic calorimeter showers orders of magnitude more quickly than physics-based simulation. However, the high-dimensional representation of showers belies the relative simplicity and structure of the underlying physical laws. This phenomenon is yet another example of the manifold hypothesis from machine learning, which states that high-dimensional data is supported on low-dimensional manifolds. We thus propose modelling calorimeter showers first by learning their manifold structure, and then estimating the density of data across this manifold. Learning manifold structure reduces the dimensionality of the data, which enables fast training and generation when compared with competing methods.
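The two-step recipe described above (first learn the data manifold, then estimate a density over the learned low-dimensional coordinates, and generate by sampling in that space and decoding) can be sketched in miniature. This is an illustrative toy, not the paper's method: PCA stands in for the learned encoder/decoder, a Gaussian fit stands in for a flexible density estimator such as a normalizing flow, and the synthetic data, dimensions, and variable names are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for shower data: high-dimensional points lying near a
# low-dimensional linear subspace (a crude proxy for the data manifold).
latent_dim, ambient_dim, n = 2, 50, 500
true_map = rng.normal(size=(latent_dim, ambient_dim))
z_true = rng.normal(size=(n, latent_dim))
x = z_true @ true_map + 0.01 * rng.normal(size=(n, ambient_dim))

# Step 1: learn the manifold. PCA via SVD plays the role of the learned
# encoder/decoder pair; the paper's approach would use a deep model.
mu = x.mean(axis=0)
_, _, vt = np.linalg.svd(x - mu, full_matrices=False)
components = vt[:latent_dim]      # rows span the learned manifold (decoder)
z = (x - mu) @ components.T       # encoded low-dimensional representation

# Step 2: estimate the density of the encoded data. A single Gaussian is
# the simplest possible density estimator over the latent coordinates.
z_mean = z.mean(axis=0)
z_cov = np.cov(z, rowvar=False)

# Generation: sample in the low-dimensional space, then decode back to
# the ambient space. Sampling and training both happen in latent_dim
# dimensions, which is the source of the claimed speedup.
z_new = rng.multivariate_normal(z_mean, z_cov, size=1000)
x_new = z_new @ components + mu
print(x_new.shape)  # (1000, 50)
```

The point of the sketch is the factorization: once the encoder compresses 50-dimensional samples to 2 coordinates, density estimation and sampling operate entirely in the small space, and only a cheap decode maps generated samples back to full dimensionality.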
Author Information
Jesse Cresswell (Layer 6 AI)
Brendan Ross (Layer 6 AI)
Gabriel Loaiza-Ganem (Layer 6 AI)
Humberto Reyes-Gonzalez (University of Genoa)
Marco Letizia (University of Genoa)
Anthony Caterini (Layer 6 AI / University of Oxford)
More from the Same Authors
- 2021: Entropic Issues in Likelihood-Based OOD Detection
  Anthony Caterini · Gabriel Loaiza-Ganem
- 2021: Efficient kernel methods for model-independent new physics searches
  Marco Letizia · Lorenzo Rosasco · Marco Rando
- 2022: Relating Regularization and Generalization through the Intrinsic Dimension of Activations
  Bradley Brown · Jordan Juravsky · Anthony Caterini · Gabriel Loaiza-Ganem
- 2022: How good is the Standard Model? Machine learning multivariate Goodness of Fit tests
  Gaia Grosso · Marco Letizia · Andrea Wulzer · Maurizio Pierini
- 2022: A fast and flexible machine learning approach to data quality monitoring
  Marco Letizia · Gaia Grosso · Andrea Wulzer · Marco Zanetti · Jacopo Pazzini · Marco Rando · Nicolò Lai
- 2022: Find Your Friends: Personalized Federated Learning with the Right Collaborators
  Yi Sui · Junfeng Wen · Yenson Lau · Brendan Ross · Jesse Cresswell
- 2022: The Union of Manifolds Hypothesis
  Bradley Brown · Anthony Caterini · Brendan Ross · Jesse Cresswell · Gabriel Loaiza-Ganem
- 2022: Denoising Deep Generative Models
  Gabriel Loaiza-Ganem · Brendan Ross · Luhuan Wu · John Cunningham · Jesse Cresswell · Anthony Caterini
- 2023 Poster: Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models
  George Stein · Jesse Cresswell · Rasa Hosseinzadeh · Yi Sui · Brendan Ross · Valentin Villecroze · Zhaoyan Liu · Anthony Caterini · Eric Taylor · Gabriel Loaiza-Ganem
- 2022: Disparate Impact in Differential Privacy from Gradient Misalignment
  Maria Esipova · Atiyeh Ashari · Yaqiao Luo · Jesse Cresswell
- 2022: Spotlight 5 - Gabriel Loaiza-Ganem: Denoising Deep Generative Models
  Gabriel Loaiza-Ganem
- 2021 Poster: Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows
  Brendan Ross · Jesse Cresswell
- 2021 Poster: Rectangular Flows for Manifold Learning
  Anthony Caterini · Gabriel Loaiza-Ganem · Geoff Pleiss · John Cunningham
- 2020 Poster: Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax
  Andres Potapczynski · Gabriel Loaiza-Ganem · John Cunningham
- 2019 Poster: Deep Random Splines for Point Process Intensity Estimation of Neural Population Data
  Gabriel Loaiza-Ganem · Sean Perkins · Karen Schroeder · Mark Churchland · John Cunningham
- 2019 Poster: The continuous Bernoulli: fixing a pervasive error in variational autoencoders
  Gabriel Loaiza-Ganem · John Cunningham