Practical Poisoning Attacks with Limited Byzantine Clients in Clustered Federated Learning

Author(s)
Vo, V
Ma, M
Bai, G
Ko, R
Neplal, S
Date
2025
Location

San Francisco, United States

Abstract

The presence of non-independent and identically distributed (non-IID) data among clients poses a critical challenge to the deployment of Federated Learning (FL) in practice. In response, state-of-the-art solutions known as Clustered Federated Learning (CFL) schemes, such as FL+HC and PACFL, have emerged to tackle this issue. Their main innovation is to cluster non-IID clients into groups of IID clients, so that techniques designed for IID scenarios become readily applicable. Nonetheless, the robustness of CFL schemes remains largely unexplored, and existing Byzantine-robust defence mechanisms prove inadequate in CFL schemes and non-IID data settings. In this work, we present novel, powerful CFL-specific poisoning attacks, named Cluster-U-M and Cluster-U-D. These attacks are designed to significantly reduce the model utility, measured in terms of test accuracy, for benign clients participating in the CFL schemes. Notably, the attacks are agnostic, requiring no adversarial knowledge of the deployed defences or of the benign clients themselves. At a high level, the attacks proceed in two steps: cluster poisoning and client-drift exploitation within clusters. The former induces the grouping of clients with different training distributions, and the latter amplifies the difference between each client's local optimum and its cluster's aggregated average. We extensively evaluate the impact of these attacks using the FL+HC and PACFL schemes at both small and large scales. The evaluation results demonstrate that the attacks can compromise up to 54% of clients, with a maximum accuracy loss of 48%. Even with only 0.1% of clients compromised, which represents a minimal practical adversarial effort, these attacks can still victimize around 4% of clients.
We evaluate the effectiveness of two state-of-the-art Byzantine-robust defence mechanisms, i.e., FLTrust and FLAME, in countering Cluster-U-M and Cluster-U-D, and find that the attacks can still victimize up to 38% of clients with an accuracy loss of 18-38% under the FL+HC scheme.
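The two-step idea above can be illustrated with a minimal sketch. This is not the paper's implementation: CFL schemes such as FL+HC and PACFL use their own clustering criteria, while here we assume a simplified greedy cosine-similarity clustering of client updates, followed by per-cluster averaging. The sketch shows how a Byzantine update that mimics one group's direction can be merged into that cluster and drag its aggregated average away from the benign members' optima.

```python
# Minimal, hypothetical sketch of clustered FL aggregation (not the
# paper's code): clients are grouped by cosine similarity of their model
# updates, then each cluster averages its members' updates.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster_updates(updates, threshold=0.9):
    """Greedy clustering: each update joins the first cluster whose
    representative it resembles, otherwise it starts a new cluster."""
    clusters = []  # list of (representative_update, member_indices)
    for i, u in enumerate(updates):
        for rep, members in clusters:
            if cosine(u, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((u, [i]))
    return [members for _, members in clusters]

def cluster_average(updates, members):
    """Per-cluster FedAvg-style mean of the members' updates."""
    dim = len(updates[0])
    return [sum(updates[i][d] for i in members) / len(members)
            for d in range(dim)]

# Two benign groups with distinct update directions, plus one Byzantine
# client whose update is aligned with group B but scaled up, so it joins
# B's cluster and shifts B's average away from the benign optimum.
group_a = [[1.0, 0.0], [0.9, 0.1]]
group_b = [[0.0, 1.0], [0.1, 0.9]]
byzantine = [[0.0, 5.0]]  # same direction as B, large magnitude

updates = group_a + group_b + byzantine
clusters = cluster_updates(updates)
print("clusters:", clusters)
print("cluster-B mean with Byzantine member:",
      cluster_average(updates, clusters[1]))
```

Because similarity-based clustering ignores magnitude, the oversized Byzantine update is indistinguishable from a benign group-B member at clustering time, yet it dominates the cluster mean, which is the client-drift amplification the abstract describes in simplified form.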

Conference Title

2025 IEEE Symposium on Security and Privacy (SP)

Citation

Vo, V; Ma, M; Bai, G; Ko, R; Neplal, S, Practical Poisoning Attacks with Limited Byzantine Clients in Clustered Federated Learning, 2025 IEEE Symposium on Security and Privacy (SP), 2025, pp. 1751-1769