
Yuqing Zhu · Xiang Yu · Yi-Hsuan Tsai · Francesco Pittaluga · Masoud Faraki · Manmohan Chandraker · Yu-Xiang Wang
Event URL: https://openreview.net/forum?id=-0F7dFHNPtr
Differentially Private Federated Learning (DPFL) is an emerging field with many applications. Gradient-averaging-based DPFL methods require costly communication rounds and hardly work with large-capacity models, due to the explicit dimension dependence in the added noise. In this paper, inspired by non-federated knowledge-transfer private learning methods, we design two DPFL algorithms (AE-DPFL and kNN-DPFL) that provide provable DP guarantees for both instance-level and agent-level privacy regimes. By voting among the data labels returned from each local model, instead of averaging the gradients, our algorithms avoid the dimension dependence and significantly reduce the communication cost. Theoretically, by applying secure multi-party computation, we can exponentially amplify the (data-dependent) privacy guarantees when the margins of the voting scores are distinctive. Empirical evaluation on both instance- and agent-level DP is conducted across five datasets, showing 2% to 12% higher accuracy at the same privacy cost compared to DP-FedAvg, or less than 65% of the privacy cost at the same accuracy.
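The label-voting aggregation described above can be sketched as a PATE-style noisy plurality vote: each agent's local model predicts a label for a query sample, and the server releases the argmax of the noise-perturbed vote histogram rather than averaged gradients. The function name, Gaussian noise choice, and parameters below are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def private_label_vote(agent_labels, num_classes, sigma, rng=None):
    """Aggregate per-agent label predictions for one query sample by a
    noisy plurality vote (illustrative sketch, not the paper's mechanism).

    agent_labels: integer label predicted by each local agent's model
    num_classes:  size of the label space
    sigma:        std. dev. of Gaussian noise added to the vote histogram
    """
    rng = np.random.default_rng(rng)
    # Histogram of votes over the label space; noise cost depends on the
    # number of classes, not on the model's parameter dimension.
    counts = np.bincount(np.asarray(agent_labels), minlength=num_classes)
    noisy_counts = counts.astype(float) + rng.normal(0.0, sigma, size=num_classes)
    return int(np.argmax(noisy_counts))

# Example: 5 agents vote on a 3-class problem; with a clear majority and
# moderate noise, the plurality label is usually released.
label = private_label_vote([1, 1, 1, 0, 2], num_classes=3, sigma=1.0, rng=0)
```

Note how the noise is added to a length-`num_classes` histogram, which is why this style of aggregation avoids the model-dimension dependence that gradient averaging incurs; a large vote margin also makes the released label insensitive to the noise, which is the intuition behind the data-dependent privacy amplification mentioned above.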

Author Information

Yuqing Zhu (University of California Santa Barbara)
Xiang Yu (NEC Laboratories America)

I am a researcher at NEC Laboratories America. I am mainly interested in computer vision and machine learning. My current research focuses on object and face recognition, generative models for data synthesis, feature correspondence and landmark localization, and metric learning in disentangling factors of variations for recognition.

Yi-Hsuan Tsai (NEC Labs America)
Francesco Pittaluga (NEC Labs America)
Masoud Faraki (NEC-Labs)
Manmohan Chandraker (UC San Diego)
Yu-Xiang Wang (UC Santa Barbara)
