

Poster

Differentially Private Optimization with Sparse Gradients

Badih Ghazi · Cristóbal Guzmán · Pritish Kamath · Ravi Kumar · Pasin Manurangsi


Abstract:

Motivated by applications of large embedding models, we study differentially private (DP) optimization problems under sparsity of *individual* gradients. We start with new near-optimal bounds for the classic mean estimation problem, but with sparse data, improving upon existing algorithms particularly in the high-dimensional regime. The corresponding lower bounds are based on a novel block-diagonal construction used in combination with existing DP mean estimation lower bounds. Next, we obtain pure- and approximate-DP algorithms with almost optimal rates for stochastic convex optimization with sparse gradients; the former represents the first nearly dimension-independent rates for this problem. Furthermore, by introducing novel analyses of bias reduction in mean estimation and randomly-stopped biased SGD, we obtain nearly dimension-independent rates for near-stationary points of the empirical risk in nonconvex settings under approximate-DP.
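For context, a minimal sketch of the standard baseline the abstract improves upon: (eps, delta)-DP mean estimation via the Gaussian mechanism, applied to sparse vectors. This is a generic illustration, not the paper's near-optimal algorithm; the function name and parameters are placeholders chosen for this sketch. It shows why dense noise is wasteful in the high-dimensional regime: noise is added to every coordinate even though each individual input has few nonzeros.

```python
import numpy as np

def gaussian_mechanism_mean(vectors, clip_norm, eps, delta, rng):
    """(eps, delta)-DP mean estimate via the Gaussian mechanism.

    Generic baseline, NOT the paper's algorithm: each vector is clipped
    to L2 norm clip_norm, and i.i.d. Gaussian noise calibrated to the
    mean's sensitivity (clip_norm / n) is added to ALL d coordinates,
    so the error grows with the ambient dimension d, not the sparsity.
    """
    n, d = vectors.shape
    # Clip each individual vector to bound the L2 sensitivity of the sum.
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    clipped = vectors * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    # Standard Gaussian-mechanism noise scale for (eps, delta)-DP.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / (eps * n)
    return clipped.mean(axis=0) + rng.normal(0.0, sigma, size=d)

# Toy data: n sparse vectors in dimension d with s nonzeros each.
rng = np.random.default_rng(0)
n, d, s = 1000, 10_000, 5
vecs = np.zeros((n, d))
for i in range(n):
    vecs[i, rng.choice(d, size=s, replace=False)] = rng.normal(size=s)

est = gaussian_mechanism_mean(vecs, clip_norm=1.0, eps=1.0, delta=1e-5, rng=rng)
```

The expected L2 error of this baseline scales like sqrt(d), which is exactly the dimension dependence the abstract's sparsity-aware algorithms aim to avoid.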
