Fine-Grained Poisoning Framework Against Federated Learning
File version
Accepted Manuscript (AM)
Author(s)
Zhang, H
Zhang, Y
Zeng, L
Chen, C
Shao, Q
Wan, W
Hu, S
Zhang, LY
Griffith University Author(s)
Primary Supervisor
Other Supervisors
Editor(s)
Date
Size
File type(s)
Location
License
Abstract
Federated learning (FL) is one of the most widely used distributed machine learning frameworks. However, FL is susceptible to poisoning attacks that can degrade the quality of the global model. Recent studies on fine-grained poisoning attacks highlight a strategic shift: attackers no longer prioritize maximal disruption of the global model, but instead control the degree of model poisoning to maintain stealth and avoid detection. However, research on fine-grained poisoning is still in its infancy, and numerous fundamental questions have yet to be addressed, including its underlying mechanisms and optimization strategies. To this end, we introduce FGP, the first comprehensive framework for Fine-Grained Poisoning on FL, which allows adversaries to precisely manipulate the global model by strategically inducing an accurate and stealthy sub-optimal solution. Fundamentally, FGP formalizes fine-grained attacks as an optimization problem that minimizes the distance between the current global model and the adversary's target (a sub-optimal solution). It then employs a real-time search strategy to dynamically refine the malicious model updates in each round. To ensure optimal attack performance, we further introduce a novel topology-based approach for error feedback. Additionally, we present a formal convergence analysis of our attacks. Armed with FGP, we conduct a comprehensive evaluation of FL's robustness against fine-grained poisoning across diverse settings. Results demonstrate that FGP significantly outperforms prior work, achieving on average 6.5x higher attack accuracy.
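To make the optimization view in the abstract concrete, the following is a minimal sketch, not the paper's FGP algorithm: it assumes plain FedAvg aggregation, a one-dimensional scaling search, and a norm-based stealth constraint, and all function and variable names (fedavg, craft_malicious_update, gamma, target_model) are illustrative. It only shows the general idea of searching for a malicious update that pulls the aggregated model toward an adversary-chosen sub-optimal target while staying within the range of benign update magnitudes.

```python
# Illustrative sketch only (assumed FedAvg setting, not the paper's exact FGP method):
# search for a malicious update that minimizes the distance between the next
# aggregated global model and an adversary-chosen sub-optimal target, subject to
# a norm bound derived from benign updates (a simple stealth constraint).
import numpy as np

def fedavg(updates):
    """Plain FedAvg aggregation: element-wise mean of client model updates."""
    return np.mean(updates, axis=0)

def craft_malicious_update(global_model, benign_updates, target_model,
                           n_malicious, gammas=np.linspace(0.1, 5.0, 50)):
    """One-dimensional search over a scaling factor gamma for the malicious update."""
    direction = target_model - global_model              # pull the global model toward the target
    norm_bound = max(np.linalg.norm(u) for u in benign_updates)

    best_update, best_dist = None, np.inf
    for gamma in gammas:                                  # per-round search over candidates
        candidate = gamma * direction
        if np.linalg.norm(candidate) > norm_bound:        # skip updates that would look anomalous
            continue
        all_updates = list(benign_updates) + [candidate] * n_malicious
        next_global = global_model + fedavg(all_updates)  # simulated next aggregation round
        dist = np.linalg.norm(next_global - target_model)
        if dist < best_dist:
            best_update, best_dist = candidate, dist
    return best_update, best_dist

# Toy usage: 8 benign clients, 2 malicious, a 10-dimensional "model".
rng = np.random.default_rng(0)
global_model = rng.normal(size=10)
target_model = global_model + 0.5                         # adversary's sub-optimal target
benign_updates = [rng.normal(scale=0.1, size=10) for _ in range(8)]
update, dist = craft_malicious_update(global_model, benign_updates, target_model, n_malicious=2)
print("residual distance to target:", round(dist, 4))
```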
Journal Title
IEEE Transactions on Dependable and Secure Computing
Conference Title
Book Title
Edition
Volume
Issue
Thesis Type
Degree Program
School
Publisher link
Patent number
Funder(s)
Grant identifier(s)
Rights Statement
This work is covered by copyright. You must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a specified licence, refer to the licence for details of permitted re-use. If you believe that this work infringes copyright please make a copyright takedown request using the form at https://www.griffith.edu.au/copyright-matters.
Item Access Status
Note
This publication has been entered in Griffith Research Online as an advance online version.
Access the data
Related item(s)
Subject
Persistent link to this record
Citation
Li, M; Zhang, H; Zhang, Y; Zeng, L; Chen, C; Shao, Q; Wan, W; Hu, S; Zhang, LY, Fine-Grained Poisoning Framework Against Federated Learning, IEEE Transactions on Dependable and Secure Computing, 2025