

Poster in Workshop: Privacy in Machine Learning (PriML) 2021

Differential Privacy via Group Shuffling

Amir Mohammad Abouei · Clement Canonne


Abstract:

The past decade has seen data privacy emerge as a fundamental and pressing issue. Among the tools developed to tackle it, differential privacy stands out as a central and principled framework, with specific variants capturing various threat models. In particular, the recently proposed shuffle model of differential privacy allows for promising tradeoffs between accuracy and privacy. However, the shuffle model may not be suitable in all situations, as it relies on a distributed setting in which all users can coordinate and trust (or simulate) a joint shuffling algorithm. To address this, we introduce a new model, the group shuffle model, in which users are partitioned into several groups, each with its own local shuffler. We investigate the privacy/accuracy tradeoffs in our model by comparing it to both the shuffle and local models of privacy, between which it in some sense interpolates. In addition to general relations between group shuffle, shuffle, and local privacy, we give a detailed comparison of the costs and benefits of the group shuffle model, establishing both upper and lower bounds for the specific task of binary summation.
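To make the setting concrete, here is a minimal Python sketch of the group shuffle model applied to binary summation. It is not the paper's protocol: the use of randomized response as the local randomizer, the round-robin partition into groups, and the flip probability `p_flip` are all assumptions made for illustration.

```python
import random

def randomized_response(bit: int, p_flip: float) -> int:
    """Standard local randomizer: flip the input bit with probability p_flip."""
    return bit ^ 1 if random.random() < p_flip else bit

def group_shuffle(bits: list[int], num_groups: int, p_flip: float) -> list[list[int]]:
    """Sketch of the group shuffle model (assumed round-robin partition):
    users are split into groups, and each group's own local shuffler
    permutes the randomized reports of its members."""
    groups = [bits[i::num_groups] for i in range(num_groups)]
    reports = []
    for group in groups:
        noisy = [randomized_response(b, p_flip) for b in group]
        random.shuffle(noisy)  # each group has its own shuffler
        reports.append(noisy)
    return reports

def estimate_sum(reports: list[list[int]], p_flip: float) -> float:
    """Debias the randomized-response counts: since E[report] = p_flip +
    (1 - 2*p_flip) * bit, the true sum is (count - n*p_flip) / (1 - 2*p_flip)."""
    flat = [b for group in reports for b in group]
    n = len(flat)
    return (sum(flat) - n * p_flip) / (1 - 2 * p_flip)

if __name__ == "__main__":
    data = [random.randint(0, 1) for _ in range(1000)]
    reports = group_shuffle(data, num_groups=10, p_flip=0.25)
    print("true sum:", sum(data), "estimate:", round(estimate_sum(reports, 0.25), 1))
```

With a single group this sketch reduces to the standard shuffle model, and with one group per user the shufflers do nothing and only local privacy remains, which is the interpolation the abstract describes.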
