Poster
in
Workshop: New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership
Advanced Free-rider Attacks in Federated Learning
Zhenqian Zhu · Jiangang Shu · Xiaohua Jia
Federated learning is an emerging machine learning paradigm in which multiple clients collaboratively train a global model without sharing their local data. Because clients have direct control over their local models and training data, federated learning is inherently vulnerable to free-rider attacks, in which a malicious client forges local model parameters to obtain the reward without contributing sufficient local data and computation resources. Many different free-rider attacks have been proposed recently, but existing attacks lack a good stealth property. The convergence property captures the convergence speed and the final accuracy of the global model, while the stealth property indicates the attacker's ability to hide its local update. In this work, we first utilize the Ornstein-Uhlenbeck (OU) process to formalize the evolution of the local and global training processes, and analyze the geometric relationship among all clients' local model updates. We then propose a scaled delta attack and an advanced free-rider attack, and prove that the advanced free-rider attack not only ensures the convergence of the aggregated model but also preserves the stealth property. Experimental results demonstrate that our advanced free-rider attack is feasible and can evade state-of-the-art defense mechanisms. Our results show that even a highly constrained adversary can carry out the advanced free-rider attack while remaining stealthy under existing defense strategies, which highlights the vulnerability of the federated learning setting and the need to develop effective defense strategies.
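To make the OU-based modeling concrete, the sketch below shows one Euler-Maruyama step of an Ornstein-Uhlenbeck process and how a free-rider might use it to fabricate a "local" update that drifts toward the current global model with small noise. This is an illustrative simplification, not the paper's exact attack; the function name `ou_step` and all parameter values (`theta`, `sigma`, `dt`) are hypothetical choices for illustration.

```python
import numpy as np

def ou_step(x, mu, theta=0.5, sigma=0.01, dt=1.0, rng=None):
    """One Euler-Maruyama step of an Ornstein-Uhlenbeck process:
    x_{t+1} = x_t + theta * (mu - x_t) * dt + sigma * sqrt(dt) * N(0, I).
    The process mean-reverts toward mu at rate theta with noise scale sigma."""
    rng = rng or np.random.default_rng()
    return x + theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)

# Illustration: a free-rider that performs no training fabricates its
# "local" update by letting stale parameters mean-revert toward the
# latest global model under OU dynamics, so its submitted updates stay
# statistically close to those of honest clients.
rng = np.random.default_rng(0)
prev_global = rng.standard_normal(10)   # global model from the previous round
curr_global = prev_global + 0.1 * rng.standard_normal(10)  # latest broadcast
fake_local = ou_step(prev_global, mu=curr_global, rng=rng)  # forged update
```

With `sigma = 0` the step reduces to a deterministic pull toward the global model (`x + theta * (mu - x) * dt`), which is the mean-reversion term that keeps the forged update near the honest clients' geometry.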