Feature-level privacy loss modelling in differentially private machine learning
Dmitrii Usynin · Alexander Ziller · Moritz Knolle · Daniel Rueckert · Georgios Kaissis
Event URL: https://openreview.net/forum?id=_b0jOHfZbfD

We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML). Optimal noise calibration in this setting requires efficient Jacobian matrix computations and tight bounds on the L2-sensitivity. Our framework achieves these objectives by relying on a functional analysis-based method for sensitivity tracking, which we briefly outline. This approach interoperates naturally and seamlessly with static graph-based automatic differentiation, which enables order-of-magnitude improvements in compilation times compared to previous work. Moreover, we demonstrate that optimising the sensitivity of the entire computational graph at once yields substantially tighter estimates of the true sensitivity compared to interval bound propagation techniques. Our work naturally complements recent developments in DP such as individual privacy accounting, aiming to offer improved privacy-utility trade-offs, and represents a step towards the integration of accessible machine learning tooling with advanced privacy accounting systems.
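The abstract's two key quantities, a tight L2-sensitivity bound derived from the Jacobian and the noise scale calibrated to it, can be illustrated with a minimal sketch. This is not the Tritium implementation; the helper names are hypothetical, and it uses plain linear maps (where the Jacobian is constant) to show why bounding the whole computational graph at once can be much tighter than propagating per-layer bounds:

```python
import numpy as np

def l2_sensitivity_linear(W):
    """Tight L2-sensitivity of f(x) = W @ x under ||x - x'||_2 <= 1.
    The Jacobian is constant (J = W), so the bound is the spectral norm."""
    return np.linalg.norm(W, ord=2)

def gaussian_sigma(sensitivity, epsilon, delta):
    """Classic Gaussian-mechanism calibration (valid for epsilon < 1):
    sigma = sqrt(2 ln(1.25/delta)) * Delta_2 / epsilon."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

# Two-layer linear "computational graph": f(x) = W2 @ W1 @ x.
W1 = np.array([[2.0, 0.0], [0.0, 0.5]])
W2 = np.array([[0.5, 0.0], [0.0, 2.0]])

# Layer-wise (interval-propagation-style) bound: product of
# per-layer spectral norms, here 2.0 * 2.0 = 4.0.
layerwise_bound = l2_sensitivity_linear(W1) * l2_sensitivity_linear(W2)

# Whole-graph bound: spectral norm of the composed Jacobian W2 @ W1,
# which is the identity here, so the tight sensitivity is 1.0.
whole_graph_bound = l2_sensitivity_linear(W2 @ W1)

# Tighter sensitivity -> less noise for the same (epsilon, delta).
sigma = gaussian_sigma(whole_graph_bound, epsilon=0.5, delta=1e-5)
```

Because the two layers contract and expand along opposite axes, the composed map is far better behaved than either factor suggests; only analysing the graph as a whole recovers this, which mirrors the paper's comparison against interval bound propagation.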

Author Information

Dmitrii Usynin (TU Munich / Imperial College London)
Alexander Ziller (Technical University of Munich)

Alex is a Ph.D. student at the Institute for Radiology and the Institute of Data Science and Artificial Intelligence in Healthcare at the Technical University of Munich. He is also a Research Scientist at OpenMined. His research interests include developing novel AI methods for cancer diagnostics and survival prediction, as well as secure and private AI in healthcare.

Moritz Knolle (Imperial College London)
Daniel Rueckert (Imperial College London)
Georgios Kaissis (Technical University Munich)