Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints
File version
Accepted Manuscript (AM)
Author(s)
Wan, W
Lu, J
Hu, S
Shi, J
Zhang, LY
Zhou, M
Zheng, Y
Griffith University Author(s)
Editor(s)
Date
Size
File type(s)
Location
Guangzhou, China
License
Abstract
Federated learning is an emerging distributed learning framework that facilitates the collaborative training of a shared global model among distributed participants while preserving their privacy. However, federated learning systems are vulnerable to Byzantine attacks from malicious participants, who can upload carefully crafted local model updates to degrade the quality of the global model or even implant a backdoor. Although this problem has received significant attention recently, current defensive schemes rely heavily on assumptions such as a fixed Byzantine attack model, availability of participants' local data, a minority of attackers, IID data distribution, etc. To relax those constraints, this paper presents Robust-FL, the first prediction-based Byzantine-robust federated learning scheme in which none of these assumptions is required. The core idea of Robust-FL is to exploit historical global models to construct an estimator, against which local models are filtered through similarity detection. Local models are then clustered to adaptively adjust the acceptable difference between a local model and the estimator, so that Byzantine users can be identified. Extensive experiments over different datasets show that our approach achieves the following advantages simultaneously: (i) independence of participants' local data, (ii) tolerance of a majority of attackers, and (iii) generalization to variable Byzantine attack models.
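The abstract outlines a three-step aggregation pipeline: predict the next global model from the history of past global models, measure each local model's deviation from that prediction, and cluster the deviations to set an adaptive acceptance threshold. The sketch below illustrates that idea only; the linear-extrapolation estimator, the two-means split of distances, and all function names are assumptions made for this illustration and are not taken from the paper.

# Minimal sketch (not the authors' implementation) of a prediction-based
# Byzantine-robust aggregation step, assuming models are flat NumPy vectors.
import numpy as np

def predict_global(history):
    """Estimate the next global model from the stored history
    (here: simple linear extrapolation of the last two global models)."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def split_by_distance(distances, iters=20):
    """1-D two-means clustering on deviation scores; returns a boolean
    mask marking the cluster closer to the estimator as benign."""
    lo, hi = float(np.min(distances)), float(np.max(distances))
    centers = np.array([lo, hi])
    assign = np.zeros(len(distances), dtype=int)
    for _ in range(iters):
        assign = np.abs(distances[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(assign == k):
                centers[k] = distances[assign == k].mean()
    return assign == int(np.argmin(centers))

def robust_aggregate(local_models, history):
    """Filter local models against the estimator, then average survivors."""
    estimator = predict_global(history)
    dists = np.array([np.linalg.norm(m - estimator) for m in local_models])
    keep = split_by_distance(dists)
    if not np.any(keep):  # fall back to the estimator if everything is rejected
        return estimator
    return np.mean(np.stack([m for m, k in zip(local_models, keep) if k]), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.ones(10)
    history = [truth * 0.8, truth * 0.9]                        # past global models
    benign = [truth + 0.05 * rng.standard_normal(10) for _ in range(6)]
    byzantine = [truth + 5.0 * rng.standard_normal(10) for _ in range(5)]
    print(np.round(robust_aggregate(benign + byzantine, history), 2))

In this toy run the Byzantine updates sit far from the extrapolated estimator, so the two-means split assigns them to the distant cluster and only the benign updates are averaged, even though attackers outnumber half of the selected set; the paper's actual estimator and clustering rule may differ.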
Journal Title
Conference Title
2022 18th International Conference on Mobility, Sensing and Networking (MSN)
Book Title
Edition
Volume
Issue
Publisher link
Funder(s)
Grant identifier(s)
Rights Statement
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Item Access Status
Note
Access the data
Related item(s)
Subject
Data and information privacy
Electronics, sensors and digital hardware not elsewhere classified
Persistent link to this record
Citation
Li, M; Wan, W; Lu, J; Hu, S; Shi, J; Zhang, LY; Zhou, M; Zheng, Y, Shielding Federated Learning: Mitigating Byzantine Attacks with Less Constraints, 2022 18th International Conference on Mobility, Sensing and Networking (MSN), 2022, pp. 178-185