

Poster

Auditing Differentially Private Machine Learning: How Private is Private SGD?

Matthew Jagielski · Jonathan Ullman · Alina Oprea

Poster Session 1 #255

Abstract:

We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis. We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks. While previous work (Ma et al., arXiv 2019) proposed this connection between differential privacy and data poisoning as a defense against data poisoning, our use of data poisoning as a tool for understanding the privacy of a specific mechanism is new. More generally, our work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms, an approach that we believe has the potential to complement and influence analytical work on differential privacy.
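The auditing idea can be made concrete through the hypothesis-testing view of differential privacy: any (epsilon, delta)-DP mechanism forces every distinguishing attack to satisfy TPR <= e^epsilon * FPR + delta, so confident estimates of an attack's true- and false-positive rates translate into a lower bound on the epsilon actually enforced by an implementation. The sketch below (not the authors' code; the attack outcomes shown are hypothetical placeholders) illustrates how such an empirical lower bound can be computed from repeated runs of a poisoning-based distinguishing attack, using Clopper-Pearson confidence intervals on the observed rates.

```python
# Minimal sketch of an empirical epsilon lower bound from attack success counts.
# Assumes a distinguishing attack was run `trials` times on datasets with and
# without the poisoned points, recording detections (tp) and false alarms (fp).
import math
from scipy.stats import beta


def clopper_pearson(successes: int, trials: int, alpha: float = 0.05):
    """Two-sided (1 - alpha) Clopper-Pearson confidence interval for a rate."""
    lo = 0.0 if successes == 0 else beta.ppf(alpha / 2, successes, trials - successes + 1)
    hi = 1.0 if successes == trials else beta.ppf(1 - alpha / 2, successes + 1, trials - successes)
    return lo, hi


def empirical_epsilon_lower_bound(tp: int, fp: int, trials: int,
                                  delta: float = 0.0, alpha: float = 0.05) -> float:
    """(epsilon, delta)-DP implies TPR <= e^epsilon * FPR + delta for any attack,
    so a lower confidence bound on TPR and an upper confidence bound on FPR
    give a lower bound on the epsilon the implementation actually provides."""
    tpr_lo, _ = clopper_pearson(tp, trials, alpha)
    _, fpr_hi = clopper_pearson(fp, trials, alpha)
    if tpr_lo - delta <= 0 or fpr_hi <= 0:
        return 0.0
    return max(0.0, math.log((tpr_lo - delta) / fpr_hi))


# Hypothetical attack results: 940/1000 detections when the poison is present,
# 50/1000 false alarms when it is absent.
print(empirical_epsilon_lower_bound(tp=940, fp=50, trials=1000, delta=1e-5))
```

If this empirical lower bound comes close to the epsilon promised by the analysis, the analysis is nearly tight for that implementation; a large gap leaves open whether the mechanism is more private in practice or whether stronger attacks exist.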
