

Poster in Workshop on Distribution Shifts: Connecting Methods and Applications

Choosing Public Datasets for Private Machine Learning via Gradient Subspace Distance

Xin Gu · Gautam Kamath · Steven Wu


Abstract:

Differentially private stochastic gradient descent privatizes model training by injecting noise into each iteration, where the noise magnitude increases with the number of model parameters. Recent works suggest that this noise can be reduced by leveraging public data: private gradients are projected onto a low-dimensional subspace prescribed by the public data. However, given several candidate public datasets, it is not clear which is most appropriate for the private task. We give an algorithm for selecting a public dataset by measuring a low-dimensional subspace distance between gradients of the public and private examples. Our method adds minimal computational and privacy overhead. Empirical evaluation suggests that the accuracy of the trained model is monotone in this distance.
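To make the selection criterion concrete, here is a minimal sketch of one plausible instantiation: each dataset's gradient subspace is taken to be the top-k right-singular subspace of its per-example gradient matrix, and the two subspaces are compared with the projection metric (a standard distance built from principal angles). The function names, the choice of k, and the use of the projection metric are illustrative assumptions, not the paper's exact algorithm; in a real private pipeline the statistic computed on private gradients would itself need to be privatized.

```python
import numpy as np


def top_k_subspace(grads: np.ndarray, k: int) -> np.ndarray:
    """Orthonormal basis (d x k) for the top-k right-singular subspace
    of an (n x d) matrix of per-example gradients."""
    # Economy SVD: rows are per-example gradients, columns are parameters.
    _, _, vt = np.linalg.svd(grads, full_matrices=False)
    return vt[:k].T


def subspace_distance(grads_public: np.ndarray,
                      grads_private: np.ndarray,
                      k: int = 16) -> float:
    """Projection-metric distance between top-k gradient subspaces.

    Equals sqrt(k - ||U^T V||_F^2): 0 when the subspaces coincide,
    sqrt(k) when they are orthogonal. (Illustrative choice of metric.)
    """
    u = top_k_subspace(grads_public, k)
    v = top_k_subspace(grads_private, k)
    overlap = np.linalg.norm(u.T @ v, ord="fro") ** 2
    return float(np.sqrt(max(k - overlap, 0.0)))


# Toy usage: pick the candidate public dataset whose gradient subspace
# is closest to the private one (synthetic gradients for illustration).
rng = np.random.default_rng(0)
g_private = rng.standard_normal((256, 1000))
candidates = {name: rng.standard_normal((256, 1000)) for name in ("A", "B")}
best = min(candidates, key=lambda n: subspace_distance(candidates[n], g_private))
print("closest public dataset:", best)
```

Under the abstract's empirical claim that accuracy is monotone in this distance, ranking candidates by such a distance and picking the minimizer is exactly the intended use of the selection algorithm.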
