Workshop: Algorithmic Fairness through the Lens of Causality and Privacy

A Closer Look at the Calibration of Differentially Private Learners

Hanlin Zhang · Xuechen (Chen) Li · Prithviraj Sen · Salim Roukos · Tatsunori Hashimoto


We systematically study the calibration of classifiers trained with differentially private stochastic gradient descent (DP-SGD) and observe miscalibration across a wide range of vision and language tasks. Our analysis identifies per-example gradient clipping in DP-SGD as a major cause of miscalibration, and we show that existing baselines for improving private calibration provide only small reductions in calibration error while occasionally causing large degradations in accuracy. As a solution, we show that differentially private variants of post-processing calibration methods, such as temperature scaling and Platt scaling, are surprisingly effective and have negligible utility cost for the overall model. Across 7 tasks, temperature scaling and Platt scaling with DP-SGD yield an average 55-fold reduction in expected calibration error while incurring at most a 1.59 percent drop in accuracy.
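To make the post-processing idea concrete, here is a minimal sketch of (non-private) temperature scaling: a single temperature T is fit on held-out logits to minimize negative log-likelihood, then used to rescale logits at prediction time. This is an illustrative reconstruction, not the paper's implementation; the differentially private variant studied in the paper would additionally privatize the fitting step, which is omitted here. All function names are our own.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # Mean negative log-likelihood of the labels under temperature-scaled logits.
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 10.0, 191)):
    # One-parameter fit: pick the temperature minimizing held-out NLL.
    # A grid search suffices since the objective is 1-D and smooth.
    return min(grid, key=lambda T: nll(logits, labels, T))
```

On an overconfident classifier (e.g. one trained with DP-SGD's per-example clipping), the fitted temperature typically comes out greater than 1, softening the predicted probabilities without changing the argmax prediction, so accuracy is untouched.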
