LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks

Author(s)
Ma, M
Zhang, Y
Arachchige, PCM
Zhang, LY
Chhetri, MB
Bai, G
Editor(s)
Liu, Joseph
Xiang, Yang
Nepal, Surya
Tsudik, Gene
Date
2023
Location
Melbourne, Australia
Abstract
Federated learning (FL) is a widely used distributed machine learning framework. However, recent studies have shown that it is susceptible to poisoning membership inference attacks (MIA). In this attack, adversaries maliciously manipulate the local updates on selected samples and share the resulting gradients with the server (i.e., poisoning). Since honest clients perform gradient descent on their samples locally, an adversary can determine whether an attacked sample is a training sample by observing the change in that sample's prediction. This type of attack exacerbates traditional passive MIA, yet defense mechanisms against it remain largely unexplored. In this work, we first investigate how effective the existing server-side robust aggregation algorithms (AGRs), designed to counter general poisoning attacks, are at defending against poisoning MIA. We find that they are largely insufficient, because poisoning MIA targets specific victim samples and, unlike general poisoning, has minimal impact on model performance. We therefore propose a new client-side defense mechanism, called LoDen, which leverages the clients' unique ability to detect suspicious privacy attacks on their own data. We theoretically quantify the membership information leaked to poisoning MIA and provide a bound for this leakage under LoDen. We perform an extensive experimental evaluation on four benchmark datasets against poisoning MIA, comparing LoDen with six state-of-the-art server-side AGRs. LoDen consistently achieves a 0% missing rate in detecting poisoning MIA across all settings, and reduces the poisoning MIA success rate to 0% in most cases. The code of LoDen is available at https://github.com/UQ-Trust-Lab/LoDen.
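To illustrate the client-side monitoring idea described in the abstract, below is a minimal, hypothetical Python/PyTorch sketch; it is not the LoDen implementation (see the linked repository for that). The function name detect_suspicious_samples, the flip_history bookkeeping, and the flip_threshold parameter are all illustrative assumptions: each client evaluates the received global model on its own training samples and flags samples whose predictions repeatedly flip away from their true labels, the very signal a poisoning-MIA adversary relies on.

```python
import torch

def detect_suspicious_samples(global_model, local_samples, flip_history,
                              flip_threshold=2):
    """Client-side poisoning-MIA check (illustrative sketch, not LoDen's code).

    Poisoning MIA infers membership by forcing prediction changes on a
    victim sample; here the client watches for exactly that signal on
    its own data. `flip_history` maps a sample index to the number of
    rounds in which the global model has misclassified that sample.
    """
    global_model.eval()
    suspicious = []
    with torch.no_grad():
        for idx, (x, y) in enumerate(local_samples):
            # Predicted class of the current global model on this sample.
            pred = global_model(x.unsqueeze(0)).argmax(dim=1).item()
            if pred != y:  # prediction flipped away from the true label
                flip_history[idx] = flip_history.get(idx, 0) + 1
            if flip_history.get(idx, 0) >= flip_threshold:
                suspicious.append(idx)  # candidate poisoning-MIA target
    return suspicious
```

In this sketch, a client could call the function once per round and exclude any flagged samples from its subsequent local updates, denying the adversary the membership signal it needs.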

Conference Title
ASIA CCS '23: Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security

Subject
Data management and data science
Information security management

Citation
Ma, M; Zhang, Y; Arachchige, PCM; Zhang, LY; Chhetri, MB; Bai, G, LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks, ASIA CCS '23: Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security, 2023, pp. 122-135