Federated Learning is a framework for training machine learning models from multiple local datasets without direct access to the data. A shared model is jointly learned through an iterative process between a server and its clients that combines locally computed model gradients or weights. However, this lack of data transparency naturally raises concerns about model security. Recently, several state-of-the-art backdoor attacks have been proposed that achieve high attack success rates while remaining difficult to detect, leading to compromised federated learning models. In this paper, motivated by differences in the output-layer distribution between models trained with and without backdoor attacks, we propose a defense that prevents backdoor attacks from influencing the model while maintaining the accuracy of the original classification task.
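To make the setting concrete, the minimal sketch below shows server-side federated averaging in which each client's output-layer weights are screened against the cross-client median before aggregation. This is only an illustrative assumption, not the defense proposed in the paper: the function name aggregate_with_output_layer_filter, the robust z-score rule, and the z_thresh cutoff are hypothetical choices introduced here.

import numpy as np

def aggregate_with_output_layer_filter(client_weights, output_layer_key, z_thresh=2.0):
    """Average client model weights, skipping clients whose output-layer
    update deviates strongly from the cross-client median.

    client_weights: list of dicts mapping layer name -> np.ndarray
    output_layer_key: name of the classifier/output layer to inspect
    z_thresh: illustrative cutoff on the robust z-score (assumption)
    """
    # Stack each client's output-layer weights into one (n_clients, d) matrix.
    outputs = np.stack([w[output_layer_key].ravel() for w in client_weights])

    # Score each client by its distance from the coordinate-wise median.
    median = np.median(outputs, axis=0)
    dists = np.linalg.norm(outputs - median, axis=1)

    # Robust z-score of the distances via the median absolute deviation.
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    scores = np.abs(dists - np.median(dists)) / mad

    # Keep clients whose output-layer update is not an outlier.
    keep = [i for i, s in enumerate(scores) if s <= z_thresh]
    if not keep:  # fall back to plain federated averaging
        keep = list(range(len(client_weights)))

    # Federated averaging over the retained clients, layer by layer.
    return {
        layer: np.mean([client_weights[i][layer] for i in keep], axis=0)
        for layer in client_weights[0]
    }

In a real deployment such a filter would run on the server at every communication round, before the aggregated weights are broadcast back to the clients.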
Author Information
Joseph Lavond (University of North Carolina at Chapel Hill)
Minhao Cheng (Hong Kong University of Science and Technology)
Yao Li (University of North Carolina at Chapel Hill)

I am an assistant professor of Statistics at UNC Chapel Hill. I was a Ph.D. student at UC Davis working with Prof. Cho-Jui Hsieh and Prof. Thomas C.M. Lee. I received my master's degree from the London School of Economics and Political Science under the supervision of Prof. Piotr Fryzlewicz. My research focuses on developing new algorithms to resolve real-world difficulties in the machine learning pipeline. I study both statistical and computational aspects of machine learning models. I am interested in developing new models with statistical guarantees, such as recommender systems, factorization machines, and fiducial inference. Currently, I am working on adversarial examples, trying to improve the robustness of deep neural networks.
More from the Same Authors
- 2022 : FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
  Yuanhao Xiong · Ruochen Wang · Minhao Cheng · Felix Yu · Cho-Jui Hsieh
- 2022 : Defend Against Textual Backdoor Attacks By Token Substitution
  Xinglin Li · Yao Li · Minhao Cheng
- 2022 : Region of Interest Detection in Melanocytic Skin Tumor Whole Slide Images
  Yi Cui · Yao Li · Jayson Miedema · Sherif Farag · J. S. Marron · Nancy Thomas
- 2022 : Grade-Adjusted Image Analysis Of Breast Cancer To Predict Subtype
  Dong Neuck Lee · J. S. Marron · Yao Li
- 2022 : Identification of the Adversary from a Single Adversarial Example
  Minhao Cheng · Rui Min
- 2023 Poster: Towards Stable Backdoor Purification through Feature Shift Tuning
  Rui Min · Zeyu Qin · Li Shen · Minhao Cheng
- 2022 Poster: Efficient Non-Parametric Optimizer Search for Diverse Tasks
  Ruochen Wang · Yuanhao Xiong · Minhao Cheng · Cho-Jui Hsieh
- 2022 Poster: Random Sharpness-Aware Minimization
  Yong Liu · Siqi Mai · Minhao Cheng · Xiangning Chen · Cho-Jui Hsieh · Yang You