

Poster in Workshop: New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership

Detecting Poisoning Nodes in Federated Learning by Ranking Gradients

Wanchuang Zhu · Benjamin Zhao · Simon Luo · Ke Deng


Abstract:

We propose a simple yet effective defense against poisoning attacks in Federated Learning. Our approach transforms the update gradients from local nodes into a matrix containing the rankings of the local nodes across all model parameter dimensions. We then distinguish malicious nodes from benign nodes using key characteristics of the rank domain, specifically the mean and standard deviation of a node's parameter rankings. Under mild conditions, we prove that our approach is guaranteed to detect all malicious nodes under typical Byzantine poisoning attack settings, with no prior knowledge of or history about the participating nodes. The effectiveness of our proposed approach is further confirmed by experiments on two classic datasets. Compared to state-of-the-art methods in the literature for defending against Byzantine attacks, our approach is unique in identifying malicious nodes by ranking and in its robustness in effectively defending against a wide range of attacks.
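As a rough illustration of the rank-domain idea sketched in the abstract, the Python snippet below ranks each node's update within every parameter dimension and then computes each node's mean and standard deviation of ranks. The flagging rule (the mean_margin and std_threshold parameters and their defaults) is an illustrative assumption, not the paper's actual decision procedure, and the function name rank_based_detection is hypothetical.

import numpy as np

def rank_based_detection(updates, mean_margin=None, std_threshold=None):
    # updates: (n_nodes, n_params) array, one gradient update per node.
    n_nodes, n_params = updates.shape
    # Rank nodes within each parameter dimension: ranks[i, j] is node i's
    # rank (0 = smallest value) among all nodes on parameter j.
    ranks = updates.argsort(axis=0).argsort(axis=0)
    # Rank-domain statistics per node: the mean and standard deviation of
    # a node's parameter rankings, as named in the abstract.
    mean_rank = ranks.mean(axis=1)
    std_rank = ranks.std(axis=1)
    # Illustrative flagging rule (an assumption): a node whose mean rank
    # sits far from the middle, or whose rank barely varies across
    # dimensions, behaves unlike a benign node.
    expected_mean = (n_nodes - 1) / 2.0
    if mean_margin is None:
        mean_margin = 0.25 * n_nodes
    if std_threshold is None:
        std_threshold = 0.5 * np.median(std_rank)
    return (np.abs(mean_rank - expected_mean) > mean_margin) | (std_rank < std_threshold)

# Toy usage: 10 benign nodes plus 2 colluding attackers that push a
# constant large update in every dimension, so they occupy extreme ranks
# consistently (high mean rank, near-zero rank variance) and get flagged.
rng = np.random.default_rng(0)
benign = rng.normal(size=(10, 1000))
attackers = np.full((2, 1000), 10.0)
flags = rank_based_detection(np.vstack([benign, attackers]))
print(flags)  # last two entries should be True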
