Poster
Differentially Private Generalized Linear Models Revisited
Raman Arora · Raef Bassily · Cristóbal Guzmán · Michael Menart · Enayat Ullah
Hall J (level 1) #818
Keywords: [ supervised learning ] [ generalized linear model ] [ optimization ] [ differential privacy ]
Abstract:
We study the problem of $(\epsilon,\delta)$-differentially private learning of linear predictors with convex losses. We provide results for two subclasses of loss functions. The first case is when the loss is smooth and non-negative but not necessarily Lipschitz (such as the squared loss). For this case, we establish an upper bound on the excess population risk of $\tilde{O}\big(\tfrac{\|w^*\|}{\sqrt{n}} + \min\big\{\tfrac{\|w^*\|^2}{(n\epsilon)^{2/3}}, \tfrac{\sqrt{d}\,\|w^*\|^2}{n\epsilon}\big\}\big)$, where $n$ is the number of samples, $d$ is the dimension of the problem, and $w^*$ is the minimizer of the population risk. Apart from the dependence on $\|w^*\|$, our bound is essentially tight in all parameters. In particular, we show a lower bound of $\tilde{\Omega}\big(\tfrac{1}{\sqrt{n}} + \min\big\{\tfrac{\|w^*\|^{4/3}}{(n\epsilon)^{2/3}}, \tfrac{\sqrt{d}\,\|w^*\|}{n\epsilon}\big\}\big)$. We also revisit the previously studied case of Lipschitz losses [SSTT21]. For this case, we close the gap in the existing work and show that the optimal rate is (up to log factors) $\Theta\big(\tfrac{\|w^*\|}{\sqrt{n}} + \min\big\{\tfrac{\|w^*\|}{\sqrt{n\epsilon}}, \tfrac{\sqrt{\mathrm{rank}}\,\|w^*\|}{n\epsilon}\big\}\big)$, where $\mathrm{rank}$ is the rank of the design matrix. This improves over existing work in the high privacy regime. Finally, our algorithms involve a private model selection approach that we develop to enable attaining the stated rates without a priori knowledge of $\|w^*\|$.
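For a concrete feel for private training of a linear predictor, below is a minimal NumPy sketch of the generic DP-SGD recipe (per-example gradient clipping plus Gaussian noise on the summed gradient). This is not the algorithm from the paper; the function name, batch size, and `noise_multiplier` are illustrative assumptions, and calibrating the noise to a target $(\epsilon,\delta)$ would require a privacy accountant, which is omitted here.

```python
import numpy as np

def dp_sgd_linear(X, y, epochs=5, lr=0.1, clip=1.0, noise_multiplier=1.0, seed=0):
    """Illustrative DP-SGD sketch for a linear predictor with squared loss.

    Per-example gradients are clipped to L2 norm <= `clip`, and Gaussian noise
    with std `noise_multiplier * clip` is added to each minibatch's summed
    gradient (Gaussian mechanism). A real implementation would choose
    `noise_multiplier` via a composition/privacy accountant for a target
    (epsilon, delta); this sketch leaves that calibration out.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    batch = 64
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch):
            b = idx[start:start + batch]
            # Per-example gradient of 0.5 * (x.w - y)^2 is (x.w - y) * x.
            residual = X[b] @ w - y[b]           # shape (m,)
            grads = residual[:, None] * X[b]     # shape (m, d)
            # Clip each example's gradient to L2 norm at most `clip`.
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
            # Gaussian mechanism on the clipped-gradient sum, then average.
            noisy_sum = grads.sum(axis=0) + rng.normal(0.0, noise_multiplier * clip, size=d)
            w -= lr * noisy_sum / len(b)
    return w

if __name__ == "__main__":
    # Toy usage: recover a linear model from noisy synthetic data.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 10))
    w_star = rng.normal(size=10)
    y = X @ w_star + 0.1 * rng.normal(size=2000)
    w_hat = dp_sgd_linear(X, y)
    print("excess squared error:", np.mean((X @ w_hat - y) ** 2) - np.mean((X @ w_star - y) ** 2))
```

In practice one would also constrain or regularize $\|w\|$, since the excess-risk rates stated in the abstract scale with $\|w^*\|$.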