Empirical studies have recently established that training differentially private models (with DP-SGD) results in disparities between classes. These works follow the methodology developed for \emph{public} models: computing per-class accuracy and then comparing the accuracy of the worst-off class with that of other classes or with the overall accuracy. However, DP-SGD adds noise during training and produces models whose predictions vary across epochs and runs. It is therefore largely unclear how to measure disparities in private models in the presence of this noise, particularly when classes are not independent. In this work, we run extensive experiments training state-of-the-art private models at various privacy levels and find that DP training tends to over- or under-predict specific classes, leading to large variations in disparities between classes.
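The disparity metric the abstract describes (per-class accuracy, with the worst-off class compared against overall accuracy, recomputed across runs) can be sketched as follows. This is a hypothetical illustration, not the authors' code; the function names and the NumPy-based setup are assumptions.

```python
import numpy as np

def per_class_accuracy(y_true, y_pred, num_classes):
    """Accuracy restricted to each class (NaN if a class is absent)."""
    return np.array([
        (y_pred[y_true == c] == c).mean() if (y_true == c).any() else np.nan
        for c in range(num_classes)
    ])

def worst_class_gaps(y_true, predictions_per_run, num_classes):
    """For each run, the gap between overall accuracy and the
    worst-off class accuracy. Spread across runs reflects the
    prediction variability introduced by DP-SGD noise."""
    gaps = []
    for y_pred in predictions_per_run:
        class_acc = per_class_accuracy(y_true, y_pred, num_classes)
        overall = (y_pred == y_true).mean()
        gaps.append(overall - np.nanmin(class_acc))
    return np.array(gaps)
```

Comparing the spread of these gaps across independent DP-SGD runs (rather than a single run's point estimate) is the kind of measurement question the abstract raises.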
Author Information
Judy Hanwen Shen (Stanford)
Soham De (DeepMind)
Sam Smith (DeepMind)
Jamie Hayes (DeepMind)
Leonard Berrada (DeepMind)
David Stutz (DeepMind)
Borja De Balle Pigem (DeepMind)
More from the Same Authors
- 2021 Poster: Fast and Memory Efficient Differentially Private-SGD via JL Projections »
  Zhiqi Bu · Sivakanth Gopi · Janardhan Kulkarni · Yin Tat Lee · Judy Hanwen Shen · Uthaipon Tantipongpipat
- 2020 Poster: Batch Normalization Biases Residual Blocks Towards the Identity Function in Deep Networks »
  Soham De · Sam Smith
- 2020 Affinity Workshop: Women in Machine Learning »
  Xinyi Chen · Erin Grant · Kristy Choi · Krystal Maughan · Xenia Miscouridou · Judy Hanwen Shen · Raquel Aoki · Belén Saldías · Mel Woghiren · Elizabeth Wood
- 2019 Poster: Adversarial Robustness through Local Linearization »
  Chongli Qin · James Martens · Sven Gowal · Dilip Krishnan · Krishnamurthy Dvijotham · Alhussein Fawzi · Soham De · Robert Stanforth · Pushmeet Kohli
- 2017: Don't Decay the Learning Rate, Increase the Batch Size »
  Sam Smith