We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP). Standard differential privacy (DP) gives a worst-case bound that can be orders of magnitude larger than the privacy loss incurred by a particular individual in a fixed dataset. The pDP framework provides a more fine-grained analysis of the privacy guarantee for a target individual, but the per-instance privacy loss itself might be a function of sensitive data. In this paper, we analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
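As background for the abstract above (a sketch in generic notation of our own, not the paper's): per-instance DP, introduced by Wang (2019), fixes a dataset $D$ and a target individual $z$ rather than taking a worst case over all neighboring datasets. A randomized mechanism $\mathcal{M}$ satisfies $(\varepsilon, \delta)$-pDP for the pair $(D, z)$ if, for every measurable output set $S$,

$$ \Pr[\mathcal{M}(D) \in S] \le e^{\varepsilon} \, \Pr[\mathcal{M}(D \cup \{z\}) \in S] + \delta, $$

and symmetrically with the roles of $D$ and $D \cup \{z\}$ exchanged; the smallest such $\varepsilon$ is the per-instance privacy loss whose release the paper studies. Objective perturbation (Chaudhuri et al., 2011) releases the minimizer of a randomly perturbed regularized empirical risk. In the Gaussian-noise variant of Kifer et al. (2012), with a generic loss $\ell$, regularization weight $\lambda$, and noise scale $\sigma$ (all placeholder symbols here),

$$ \hat{\theta} = \operatorname*{argmin}_{\theta \in \mathbb{R}^d} \; \sum_{i=1}^{n} \ell(\theta; x_i, y_i) + \frac{\lambda}{2} \lVert \theta \rVert_2^2 + b^{\top} \theta, \qquad b \sim \mathcal{N}(0, \sigma^2 I_d). $$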
Author Information
Rachel Redberg (UC Santa Barbara)
Yu-Xiang Wang (UC Santa Barbara)
More from the Same Authors
- 2021 Spotlight: Logarithmic Regret in Feature-based Dynamic Pricing
  Jianyu Xu · Yu-Xiang Wang
- 2022: Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy
  Rachel Redberg · Yuqing Zhu · Yu-Xiang Wang
- 2023 Poster: Improving the Privacy and Practicality of Objective Perturbation for Differentially Private Linear Learners
  Rachel Redberg · Antti Koskela · Yu-Xiang Wang
- 2022 Poster: Differentially Private Linear Sketches: Efficient Implementations and Applications
  Fuheng Zhao · Dan Qiao · Rachel Redberg · Divyakant Agrawal · Amr El Abbadi · Yu-Xiang Wang
- 2021 Poster: Logarithmic Regret in Feature-based Dynamic Pricing
  Jianyu Xu · Yu-Xiang Wang
- 2021 Poster: Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings
  Ming Yin · Yu-Xiang Wang
- 2021 Poster: Towards Instance-Optimal Offline Reinforcement Learning with Pessimism
  Ming Yin · Yu-Xiang Wang
- 2021 Poster: Near-Optimal Offline Reinforcement Learning via Double Variance Reduction
  Ming Yin · Yu Bai · Yu-Xiang Wang
- 2017 Poster: Higher-Order Total Variation Classes on Grids: Minimax Theory and Trend Filtering Methods
  Veeranjaneyulu Sadhanala · Yu-Xiang Wang · James Sharpnack · Ryan Tibshirani
- 2016: Optimal and Adaptive Off-policy Evaluation in Contextual Bandits
  Yu-Xiang Wang
- 2016 Poster: Total Variation Classes Beyond 1d: Minimax Rates, and the Limitations of Linear Smoothers
  Veeranjaneyulu Sadhanala · Yu-Xiang Wang · Ryan Tibshirani
- 2015: Yu-Xiang Wang: Learning with differential privacy: stability, learnability and the sufficiency and necessity of ERM principle
  Yu-Xiang Wang
- 2015 Poster: Differentially private subspace clustering
  Yining Wang · Yu-Xiang Wang · Aarti Singh