The Lipschitz constant of the map between the input and output space represented by a neural network is a natural metric for assessing the robustness of the model. We present a new method to constrain the Lipschitz constant of dense deep learning models that can also be generalized to other architectures. The method relies on a simple weight normalization scheme during training which ensures every layer is 1-Lipschitz. A simple residual connection can then be used to make the model monotonic in any subset of its inputs, which is useful in scenarios where domain knowledge dictates such dependence. Examples can be found in algorithmic fairness requirements or, as presented here, in the classification of particle decays. Our normalization is minimally constraining and allows the underlying architecture to maintain higher expressiveness compared to other techniques which aim to either control the Lipschitz constant of the model or ensure its monotonicity. We show how the algorithm was used to train a powerful, robust, and interpretable discriminator for heavy-flavor decays in the LHCb Run 3 trigger system.
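The core recipe described above, rescaling each weight matrix so every layer is at most 1-Lipschitz and then adding a residual term over the inputs that should be monotonic, can be sketched in a few lines of PyTorch. The sketch below is illustrative only: it assumes the infinity operator norm (maximum absolute row sum) for the normalization and a plain ReLU network, and the layer widths, `lam`, and `monotonic_idx` are hypothetical placeholders rather than the exact configuration used in the paper.

```python
import torch
import torch.nn as nn


class LipschitzLinear(nn.Module):
    """Linear layer rescaled so it is at most 1-Lipschitz.

    Sketch: the weight is normalized by its infinity operator norm
    (max absolute row sum) only when that norm exceeds 1.
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        w = self.linear.weight
        norm = w.abs().sum(dim=1).max()            # infinity operator norm
        w = w / torch.clamp(norm, min=1.0)         # rescale only if norm > 1
        return nn.functional.linear(x, w, self.linear.bias)


class MonotonicLipschitzNet(nn.Module):
    """Lipschitz-constrained network with a residual term that makes the
    output monotonic in a chosen subset of inputs (illustrative sketch).
    """
    def __init__(self, in_features, monotonic_idx, lam=1.0):
        super().__init__()
        # Composition of 1-Lipschitz layers and 1-Lipschitz activations
        # keeps the whole map g at most lam-Lipschitz (here lam = 1).
        self.g = nn.Sequential(
            LipschitzLinear(in_features, 64), nn.ReLU(),
            LipschitzLinear(64, 64), nn.ReLU(),
            LipschitzLinear(64, 1),
        )
        self.monotonic_idx = monotonic_idx          # indices of monotonic inputs
        self.lam = lam                              # Lipschitz bound on g

    def forward(self, x):
        # Each partial derivative of g is bounded by lam in magnitude, so
        # adding lam * x_i forces the derivative w.r.t. x_i to be >= 0.
        residual = x[:, self.monotonic_idx].sum(dim=1, keepdim=True)
        return self.g(x) + self.lam * residual
```

Because the gradient of `g` with respect to any single input cannot be more negative than `-lam`, the added residual term guarantees a non-negative derivative in the selected inputs, while the remaining inputs are left unconstrained.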
Author Information
Niklas S Nolte (MIT)
Ouail Kitouni (MIT)
Mike Williams (MIT)
More from the Same Authors
- 2022 : Finding NEEMo: Geometric Fitting using Neural Estimation of the Energy Mover’s Distance
  Ouail Kitouni · Mike Williams · Niklas S Nolte
- 2022 Poster: Towards Understanding Grokking: An Effective Theory of Representation Learning
  Ziming Liu · Ouail Kitouni · Niklas S Nolte · Eric Michaud · Max Tegmark · Mike Williams
- 2015 : Machine Learning in HEP
  Mike Williams