

Learning to Attack Federated Learning: A Model-based Reinforcement Learning Attack Framework

Henger Li · Xiaolin Sun · Zizhan Zheng

Hall J (level 1) #903

Keywords: [ Adversarial Attacks ] [ Reinforcement Learning ] [ Federated Learning ]


We propose a model-based reinforcement learning framework for deriving untargeted poisoning attacks against federated learning (FL) systems. Our framework first approximates the distribution of the clients' aggregated data using model updates observed at the server. The learned distribution is then used to build a simulator of the FL environment, in which an adaptive attack policy is trained through reinforcement learning. Our framework learns strong attacks automatically even when the server adopts a robust aggregation rule. We further derive an upper bound on the attacker's performance loss due to inaccurate distribution estimation. Experimental results on real-world datasets demonstrate that the proposed attack framework significantly outperforms state-of-the-art poisoning attacks. These results highlight the importance of developing adaptive defenses for FL systems.
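To make the policy-learning step concrete, below is a minimal toy sketch (not the authors' implementation) of learning an untargeted poisoning attack against a robustly aggregated FL system. All names (`run_fl`, `attack_scale`, the action set, the bandit learner) are illustrative assumptions: the attacker controls one client, sends a scaled sign-flipped update, and treats the server's final loss as reward; a simple epsilon-greedy bandit stands in for the full model-based RL policy, and coordinate-wise median stands in for the robust aggregation rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy FL setup: linear regression, 4 honest clients + 1 attacker-controlled client.
d = 5
w_true = rng.normal(size=d)
clients = []
for _ in range(4):
    X = rng.normal(size=(50, d))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

def grad(w, X, y):
    # Least-squares gradient for one client's local data.
    return 2 * X.T @ (X @ w - y) / len(y)

def run_fl(attack_scale, rounds=30, lr=0.05):
    """Run FL with coordinate-wise median aggregation; the attacker sends a
    sign-flipped benign-mean gradient scaled by attack_scale (its action)."""
    w = np.zeros(d)
    for _ in range(rounds):
        grads = [grad(w, X, y) for X, y in clients]
        benign_mean = np.mean(grads, axis=0)
        poisoned = -attack_scale * benign_mean      # attacker's malicious update
        updates = np.stack(grads + [poisoned])
        agg = np.median(updates, axis=0)            # robust aggregation rule
        w -= lr * agg
    # Global loss on honest data serves as the attacker's reward signal.
    return float(np.mean([np.mean((X @ w - y) ** 2) for X, y in clients]))

# Epsilon-greedy bandit over a small set of attack scales (stand-in for RL).
actions = [0.0, 1.0, 3.0, 10.0]
q = np.zeros(len(actions))
n = np.zeros(len(actions))
for t in range(40):
    a = int(rng.integers(len(actions))) if rng.random() < 0.2 else int(np.argmax(q))
    r = run_fl(actions[a])                          # higher loss = better for attacker
    n[a] += 1
    q[a] += (r - q[a]) / n[a]                       # incremental mean update

best = actions[int(np.argmax(q))]
print("best attack scale found:", best)
```

In the paper's full framework, the simulator used for this search is built from a distribution of client data estimated from observed model updates rather than from the true client data, which is what makes the distribution-estimation error bound relevant.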
