Federated learning (FL) is a popular distributed learning framework that trains a global model through iterative communication between a central server and edge devices. Recent works have demonstrated that FL is vulnerable to model poisoning attacks. Several server-based defenses (e.g., robust aggregation) have been proposed to mitigate such attacks. However, we empirically show that under extremely strong attacks these defenses fail to guarantee the robustness of FL. More importantly, we observe that once the global model is polluted, the impact of the attack persists in subsequent rounds even if no further attacks are launched. In this work, we propose a client-based defense, named White Blood Cell for Federated Learning (FL-WBC), which can mitigate model poisoning attacks that have already polluted the global model. The key idea of FL-WBC is to identify the parameter space where the long-lasting attack effect resides and perturb that space during local training. Furthermore, we derive a certified robustness guarantee against model poisoning attacks and a convergence guarantee for FedAvg when FL-WBC is applied. We conduct experiments on FashionMNIST and CIFAR10 to evaluate the defense against state-of-the-art model poisoning attacks. The results demonstrate that our method can effectively mitigate the impact of model poisoning attacks on the global model within 5 communication rounds with nearly no accuracy drop under both IID and Non-IID settings. Our defense is also complementary to existing server-based robust aggregation approaches and can further improve the robustness of FL under extremely strong attacks.
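To make the key idea concrete, below is a minimal, hypothetical sketch of a client-side local training loop in PyTorch. The paper derives the attack-susceptible parameter space from the Hessian of the local loss; here, the crude curvature proxy (the change in gradients between consecutive steps), the threshold SENSITIVITY_EPS, the noise scale NOISE_SCALE, and the function name local_train_with_wbc are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

NOISE_SCALE = 0.4       # scale of the defensive Laplace perturbation (tunable, assumed)
SENSITIVITY_EPS = 1e-3  # below this curvature proxy a coordinate is "insensitive" (assumed)

def local_train_with_wbc(model, loader, lr=0.01, epochs=1):
    """Local SGD with an illustrative FL-WBC-style perturbation step."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    prev_grads = None
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            grads = [p.grad.detach().clone() for p in model.parameters()]
            opt.step()
            if prev_grads is not None:
                with torch.no_grad():
                    laplace = torch.distributions.Laplace(0.0, NOISE_SCALE)
                    for p, g, g_prev in zip(model.parameters(), grads, prev_grads):
                        # Small change in gradient between steps ~ small curvature:
                        # a crude proxy for directions where an attack's effect
                        # can persist without increasing the local loss.
                        curvature_proxy = (g - g_prev).abs()
                        mask = curvature_proxy < SENSITIVITY_EPS
                        # Perturb only the insensitive coordinates.
                        p.add_(laplace.sample(p.shape) * mask)
            prev_grads = grads
    return model
```

The design intuition mirrors the abstract: coordinates where the loss surface is nearly flat are exactly where a poisoned update can hide without being corrected by local training, so injecting noise there disrupts the lingering attack effect while barely affecting benign accuracy.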
Author Information
Jingwei Sun (Duke University)
Ang Li (Duke University)
Louis DiValentin (Accenture)
Amin Hassanzadeh (Accenture)
Yiran Chen (Duke University)
Hai Li (Duke University)
More from the Same Authors

- 2022: Fine-grain Inference on Out-of-Distribution Data with Hierarchical Classification »
  Randolph Linderman · Jingyang Zhang · Nathan Inkawhich · Hai Li · Yiran Chen
- 2022 Poster: Why do We Need Large Batchsizes in Contrastive Learning? A Gradient-Bias Perspective »
  Changyou Chen · Jianyi Zhang · Yi Xu · Liqun Chen · Jiali Duan · Yiran Chen · Son Tran · Belinda Zeng · Trishul Chilimbi
- 2020 Poster: DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles »
  Huanrui Yang · Jingyang Zhang · Hongliang Dong · Nathan Inkawhich · Andrew Gardner · Andrew Touchet · Wesley Wilkes · Heath Berry · Hai Li
- 2020 Poster: Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability »
  Nathan Inkawhich · Kevin J Liang · Binghui Wang · Matthew Inkawhich · Lawrence Carin · Yiran Chen
- 2020 Oral: DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles »
  Huanrui Yang · Jingyang Zhang · Hongliang Dong · Nathan Inkawhich · Andrew Gardner · Andrew Touchet · Wesley Wilkes · Heath Berry · Hai Li
- 2019 Poster: Defending Neural Backdoors via Generative Distribution Modeling »
  Ximing Qiao · Yukun Yang · Hai Li
- 2018 Poster: Generalized Inverse Optimization through Online Learning »
  Chaosheng Dong · Yiran Chen · Bo Zeng
- 2017 Poster: TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning »
  Wei Wen · Cong Xu · Feng Yan · Chunpeng Wu · Yandan Wang · Yiran Chen · Hai Li
- 2017 Oral: TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning »
  Wei Wen · Cong Xu · Feng Yan · Chunpeng Wu · Yandan Wang · Yiran Chen · Hai Li