Poster
Improved Differential Privacy for SGD via Optimal Private Linear Operators on Adaptive Streams
Sergey Denisov · H. Brendan McMahan · John Rush · Adam Smith · Abhradeep Guha Thakurta

Thu Dec 01 09:00 AM -- 11:00 AM (PST) @ Hall J #720

Motivated by recent applications requiring differential privacy in the setting of adaptive streams, we investigate the question of optimal instantiations of the matrix mechanism in this setting. We prove fundamental theoretical results on the applicability of matrix factorizations to the adaptive streaming setting, and provide a new parameter-free fixed-point algorithm for computing optimal factorizations. We instantiate this framework with respect to concrete matrices which arise naturally in the machine learning setting, and train user-level differentially private models with the resulting optimal mechanisms, yielding significant improvements on a notable problem in federated learning with user-level differential privacy.
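
To make the abstract concrete, here is a minimal, hypothetical sketch of the general matrix-mechanism idea for the prefix-sum workload, not the paper's algorithm: it uses a simple square-root factorization A = C C in place of the optimal factorization that the paper computes with its fixed-point algorithm. The function name, the noise scaling, and the unit-norm-per-step sensitivity assumption are all illustrative.

    import numpy as np
    import scipy.linalg

    def matrix_mechanism_prefix_sums(x, noise_multiplier, rng=None):
        """Release all prefix sums of the stream x with Gaussian noise,
        via a factorization A = B @ C of the prefix-sum matrix A.
        A sketch only; the paper optimizes the choice of B and C."""
        rng = np.random.default_rng() if rng is None else rng
        n = len(x)
        # A is the lower-triangular all-ones matrix: (A @ x)[i] = x[0] + ... + x[i].
        A = np.tril(np.ones((n, n)))
        # Simple square-root factorization A = C @ C (illustrative, not optimal).
        # C is lower triangular here, so C @ x could be computed as the stream
        # arrives, which is what makes a streaming/adaptive instantiation possible.
        C = np.real(scipy.linalg.sqrtm(A))
        B = C
        # Sensitivity of C, assuming one user/step changes one unit-norm entry
        # of x: the maximum column norm of C.
        sensitivity = np.linalg.norm(C, axis=0).max()
        # Add independent Gaussian noise to C @ x, then map back through B.
        # After multiplication by B the noise is correlated across steps, which
        # is the source of the improvement over noising each prefix sum separately.
        noisy = C @ x + rng.normal(scale=noise_multiplier * sensitivity, size=n)
        return B @ noisy

    # Example: privately estimate running sums of 8 clipped (scalar) gradients.
    x = np.ones(8)
    print(matrix_mechanism_prefix_sums(x, noise_multiplier=0.5))

In DP-SGD, the entries of x would be clipped per-round gradient updates, and the released prefix sums drive the model updates; the paper's contribution is characterizing which factorizations remain valid under adaptively chosen x and finding the optimal ones.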

Author Information

Sergey Denisov (University of Wisconsin-Madison)
H. Brendan McMahan (Google, Inc.)
John Rush (Google)

I come from a pure mathematics background, formerly working as a harmonic analyst and mathematical physicist. After grad school I moved into machine learning on the software side, joining Google in 2018 to work on federated learning. I am one of the main authors of TensorFlow Federated; ask me about it!

Adam Smith (Boston University)
Abhradeep Guha Thakurta (Google Research - Brain Team)
