
Breaking the Communication-Privacy-Accuracy Trilemma
Wei-Ning Chen · Peter Kairouz · Ayfer Ozgur

Tue Dec 08 09:00 PM -- 11:00 PM (PST) @ Poster Session 2 #649

Two major challenges in distributed learning and estimation are 1) preserving the privacy of the local samples; and 2) communicating them efficiently to a central server, while achieving high accuracy for the end-to-end task. While there has been significant interest in addressing each of these challenges separately in the recent literature, treatments that simultaneously address both challenges are still largely missing. In this paper, we develop novel encoding and decoding mechanisms that simultaneously achieve optimal privacy and communication efficiency in various canonical settings.

In particular, we consider the problems of mean estimation and frequency estimation under ε-local differential privacy and b-bit communication constraints. For mean estimation, we propose a scheme based on Kashin’s representation and random sampling, with order-optimal estimation error under both constraints. For frequency estimation, we present a mechanism that leverages the recursive structure of Walsh-Hadamard matrices and achieves order-optimal estimation error for all privacy levels and communication budgets. As a by-product, we also construct a distribution estimation mechanism that is rate-optimal for all privacy regimes and communication constraints, extending recent work that is limited to b = 1 and ε = O(1). Our results demonstrate that intelligent encoding under joint privacy and communication constraints can yield performance matching the optimal accuracy achievable under either constraint alone.
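To make the "joint privacy and communication" idea concrete, here is a toy scalar sketch of how a 1-bit communication budget and ε-LDP can be composed for mean estimation: each client randomly rounds its value to one unbiased bit, then flips that bit via binary randomized response; the server debiases the average. This is a simplified illustration under our own assumptions (function names `encode`/`decode` are ours), not the paper's Kashin-based mechanism.

```python
import math
import random

def encode(x, eps):
    """Encode x in [-1, 1] as a single eps-LDP private bit.

    Step 1 (communication): randomized rounding to {-1, +1},
    unbiased since E[q] = x.
    Step 2 (privacy): binary randomized response keeping the bit
    with probability e^eps / (e^eps + 1), which satisfies eps-LDP.
    """
    q = 1 if random.random() < (1 + x) / 2 else -1
    p_keep = math.exp(eps) / (math.exp(eps) + 1)
    return q if random.random() < p_keep else -q

def decode(bits, eps):
    """Server-side debiasing: E[bit] = x * (e^eps - 1)/(e^eps + 1),
    so dividing the empirical mean by that factor is unbiased."""
    scale = (math.exp(eps) - 1) / (math.exp(eps) + 1)
    return sum(bits) / (len(bits) * scale)
```

The debiasing factor shrinks as ε → 0, so the estimator's variance grows at low privacy budgets; the paper's Kashin-based construction extends this one-dimensional trade-off to high-dimensional vectors with order-optimal error.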
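The frequency-estimation mechanism leverages the recursive structure of Walsh-Hadamard matrices. That structure itself is compact enough to show directly: H_1 = [1], and each larger matrix is built from two copies of the previous one. The sketch below (a plain-Python construction we wrote for illustration, not the paper's full mechanism) builds the matrix and relies on its row orthogonality.

```python
def hadamard(k):
    """Walsh-Hadamard matrix of size 2^k, built recursively via
    H_{2n} = [[H_n,  H_n],
              [H_n, -H_n]].
    Rows are mutually orthogonal with entries in {-1, +1}."""
    if k == 0:
        return [[1]]
    h = hadamard(k - 1)
    top = [row + row for row in h]                      # [H_n,  H_n]
    bottom = [row + [-v for v in row] for row in h]     # [H_n, -H_n]
    return top + bottom
```

The recursion is what makes the encoding cheap: a client can compute any single entry H[i][j] in O(k) time without materializing the full 2^k × 2^k matrix, which is the property the mechanism exploits under a b-bit budget.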

Author Information

Wei-Ning Chen (Stanford University)
Peter Kairouz (Google)

Peter Kairouz is a Google Research Scientist working on decentralized, privacy-preserving, and robust machine learning algorithms. Prior to Google, his research largely focused on building decentralized technologies for anonymous broadcasting over complex networks, understanding the fundamental trade-off between differential privacy and utility of learning algorithms, and leveraging state-of-the-art deep generative models for data-driven privacy and fairness.

Ayfer Ozgur (Stanford University)
