AgrEvader: Poisoning Membership Inference against Byzantine-robust Federated Learning
Author(s)
Zhang, Y
Bai, G
Chamikara, MAP
Ma, M
Shen, L
Wang, J
Nepal, S
Xue, M
Wang, L
Liu, J
Date
2023
Location
Austin, USA
Abstract
The Poisoning Membership Inference Attack (PMIA) is an emerging privacy attack that poses a significant threat to federated learning (FL). The adversary poisons the training data (i.e., applies adversarial manipulations to training examples) and extracts membership information by exploiting the changes in loss that the poisoning induces. The PMIA thus escalates the traditional poisoning attack, which focuses primarily on model corruption. However, no comprehensive systematic study has yet investigated this topic. In this work, we conduct a benchmark evaluation of PMIA performance against Byzantine-robust FL settings, which are specifically designed to mitigate poisoning attacks. We find that all existing coordinate-wise averaging mechanisms fail to defend against the PMIA, whereas the detect-then-drop strategy is effective in most cases, implying that the injected poison is memorized and its effect rarely dissipates. Inspired by this observation, we propose AgrEvader, a PMIA that maximizes the adversarial impact on the victim samples while evading detection by Byzantine-robust mechanisms. AgrEvader significantly outperforms existing PMIAs: it achieved attack accuracies ranging from 72.78% (on CIFAR-10) to 97.80% (on Texas100), an average increase of 13.89% over the strongest PMIA reported in the literature. We evaluated AgrEvader on five datasets across different domains and against a comprehensive list of threat models, including black-box, gray-box, and white-box models in both targeted and non-targeted scenarios. AgrEvader achieved consistently high accuracy across all settings tested. The code is available at: https://github.com/PrivSecML/AgrEvader.
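
The abstract's core signal, a change in a victim sample's loss after poisoned updates are aggregated, can be illustrated with a minimal sketch. This is not the AgrEvader implementation; it only shows, under assumed placeholder names (loss_on, infer_membership, model_before, model_after, threshold), how a loss gap between two global-model snapshots could be thresholded into a membership guess.

# Minimal PMIA-style membership signal (illustrative sketch only; all
# identifiers here are hypothetical, not taken from the AgrEvader codebase).
import torch
import torch.nn.functional as F

def loss_on(model, x, y):
    # Cross-entropy loss of a global-model snapshot on one candidate sample.
    model.eval()
    with torch.no_grad():
        return F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0)).item()

def infer_membership(model_before, model_after, x, y, threshold=0.5):
    # Compare the sample's loss before and after the poisoned rounds.
    # Training-set members react to the poison differently from non-members;
    # the sign and scale of the gap depend on the attack variant, so the
    # threshold value here is an assumed placeholder.
    gap = abs(loss_on(model_after, x, y) - loss_on(model_before, x, y))
    return gap > threshold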
Conference Title
WWW '23: Proceedings of the ACM Web Conference 2023
Subject
Data management and data science
Data security and protection
Citation
Zhang, Y; Bai, G; Chamikara, MAP; Ma, M; Shen, L; Wang, J; Nepal, S; Xue, M; Wang, L; Liu, J, AgrEvader: Poisoning Membership Inference against Byzantine-robust Federated Learning, WWW '23: Proceedings of the ACM Web Conference 2023, 2023, pp. 2371-2382