Poster

Differentially Private Online-to-batch for Smooth Losses

Qinzi Zhang · Hoang Tran · Ashok Cutkosky

Hall J #1034

Keywords: [ Online Learning ] [ Adaptive ] [ Differential Privacy ] [ Convex Optimization ] [ Online-to-Batch ]

Tue 29 Nov 2 p.m. PST — 4 p.m. PST

Abstract: We develop a new reduction that converts any online convex optimization algorithm suffering $O(\sqrt{T})$ regret into an $\epsilon$-differentially private stochastic convex optimization algorithm with the optimal convergence rate $\tilde O(1/\sqrt{T} + \sqrt{d}/(\epsilon T))$ on smooth losses in linear time, forming a direct analogy to the classical non-private "online-to-batch" conversion. By applying our techniques to more advanced adaptive online algorithms, we produce adaptive differentially private counterparts whose convergence rates depend on a priori unknown variances or parameter norms.
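To make the reduction pattern concrete, below is a minimal illustrative sketch of a generic "noisy online-to-batch" conversion: an online learner (plain online gradient descent here) is fed privatized stochastic gradients, and the averaged iterate is returned as the batch solution. This is not the paper's algorithm or its privacy calibration; the Gaussian noise, step sizes, and helper names (`noisy_online_to_batch`, `grad_oracle`) are assumptions chosen only to show the shape of the reduction.

```python
# Illustrative sketch of a generic noisy online-to-batch conversion.
# NOT the paper's method: noise scale and learning rate are placeholders,
# not values calibrated by a real differential-privacy analysis.
import numpy as np

def noisy_online_to_batch(grad_oracle, d, T, lr=0.1, noise_scale=1.0, seed=0):
    """grad_oracle(w, t) returns a stochastic gradient of the t-th loss at w.
    Hypothetical interface for illustration only."""
    rng = np.random.default_rng(seed)
    w = np.zeros(d)    # online learner's current iterate
    avg = np.zeros(d)  # running average of iterates (the "batch" output)
    for t in range(1, T + 1):
        g = grad_oracle(w, t)
        g_priv = g + noise_scale * rng.standard_normal(d)  # perturb the gradient
        w = w - (lr / np.sqrt(t)) * g_priv  # OGD step with O(sqrt(T)) regret
        avg += (w - avg) / t                # incremental (anytime) averaging
    return avg

# Usage example: minimize the average of simple quadratics 0.5*||w - c_t||^2,
# whose stochastic gradient at round t is w - c_t.
if __name__ == "__main__":
    d, T = 5, 10_000
    rng = np.random.default_rng(1)
    centers = rng.standard_normal((T, d))
    w_out = noisy_online_to_batch(lambda w, t: w - centers[t - 1], d, T)
    print("distance to mean center:", np.linalg.norm(w_out - centers.mean(axis=0)))
```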
