

Contributed talk in Workshop: Privacy in Machine Learning (PriML) 2021

Privacy-Aware Rejection Sampling

Jordan Awan · Vinayak Rao


Abstract: Differential privacy (DP) offers strong protection against adversaries with arbitrary side information and computational power. However, many implementations of DP mechanisms are vulnerable to side-channel attacks, such as timing attacks. Because many privacy mechanisms, such as the exponential mechanism, do not admit simple exact implementations, they are often realized with sampling methods such as MCMC or rejection sampling, whose runtime can itself leak privacy. In this work, we quantify the privacy cost due to the runtime of a rejection sampler in terms of $(\epsilon,\delta)$-DP. We also propose three modifications to the rejection sampling algorithm that protect against timing attacks by making the runtime independent of the data. Finally, we use our techniques to develop an adaptive rejection sampler for log-Hölder densities, which also has data-independent runtime.
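To make the timing side channel concrete, here is a minimal Python sketch (not the authors' mechanisms; the function names, the envelope constant M, and the fixed proposal budget T are illustrative assumptions). A standard rejection sampler's iteration count is geometric with a data-dependent mean, so runtime reveals information about the target; a truncated variant that always performs exactly T proposals has data-independent runtime at the cost of sometimes returning no sample.

```python
import numpy as np

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, M, rng):
    """Standard rejection sampler, assuming target_pdf(x) <= M * proposal_pdf(x).

    The number of loop iterations is geometric with a mean that depends
    on the (data-dependent) target, so wall-clock runtime leaks privacy.
    """
    iterations = 0
    while True:
        iterations += 1
        x = proposal_sample(rng)
        if rng.uniform() * M * proposal_pdf(x) <= target_pdf(x):
            return x, iterations  # runtime observable via `iterations`

def truncated_rejection_sample(target_pdf, proposal_sample, proposal_pdf,
                               M, T, rng):
    """Hypothetical timing-protected variant: always perform exactly T
    proposals, so total work is independent of the data. Returns the
    first accepted draw, or None if all T proposals are rejected.
    """
    result = None
    for _ in range(T):
        x = proposal_sample(rng)
        accept = rng.uniform() * M * proposal_pdf(x) <= target_pdf(x)
        if accept and result is None:
            result = x  # keep looping so the loop always runs T times
    return result

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Unnormalized target: standard normal truncated to [0, inf);
    # proposal: Exp(1). Envelope: exp(-x^2/2) <= e^{1/2} * exp(-x).
    target = lambda x: np.exp(-x**2 / 2) if x >= 0 else 0.0
    proposal_pdf = lambda x: np.exp(-x)
    proposal_sample = lambda r: r.exponential(1.0)
    M = np.exp(0.5)
    x, iters = rejection_sample(target, proposal_sample, proposal_pdf, M, rng)
    y = truncated_rejection_sample(target, proposal_sample, proposal_pdf,
                                   M, T=20, rng=rng)
```

Fixing the proposal budget trades the timing leak for a failure probability, which a complete mechanism would still need to account for in its privacy analysis.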
