Preserving Privacy of Input Features Across All Stages of Collaborative Learning

File version

Accepted Manuscript (AM)

Author(s)
Lu, J
Xue, L
Wan, W
Li, M
Zhang, LY
Hu, S
Date
2023
Location

Wuhan, China

Abstract

Collaborative learning is a widely used privacy-preserving distributed training framework where users participate in global training using gradients instead of disclosing their private data. However, gradient inversion attacks have challenged the privacy of this approach by reconstructing private inputs from gradients. While prior works have proposed various defenses against gradient inversion attacks, their privacy assessments have mainly focused on untrained models, lacking consideration for the trained model, which should be the primary focus in collaborative learning. In this context, we first conduct a comprehensive privacy evaluation across all stages of collaborative learning. We uncover the limitations of existing defenses in providing sufficient privacy protection for trained models. To address this challenge, we introduce GradPrivacy, a novel framework tailored to safeguard the privacy of trained models without compromising their performance. GradPrivacy comprises two key components: the amplitude perturbation module, which perturbs gradient parameters associated with critical features to thwart attackers from reconstructing essential input feature information, and the deviation correction module, which effectively maintains model performance by correcting deviations in model update directions from previous rounds. Extensive evaluations demonstrate that GradPrivacy successfully achieves effective privacy preservation, surpassing state-of-the-art methods in terms of the privacy-accuracy trade-off.
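
The sketch below is not from the paper; it is a minimal illustration, under assumed design choices, of the two components the abstract describes: perturbing the gradient entries treated as "critical" (here approximated by the largest-magnitude entries, an assumption) and correcting the resulting update direction using the previous round's update. Function names, the top-fraction selection rule, the noise scale, and the blending weight `alpha` are all hypothetical.

```python
# Illustrative sketch only (not the authors' implementation).
import numpy as np

def amplitude_perturbation(grad, top_frac=0.1, noise_std=0.05, rng=None):
    """Add Gaussian noise to the top `top_frac` gradient entries by magnitude.

    Selecting by magnitude is a stand-in for 'parameters associated with
    critical input features' described in the abstract.
    """
    rng = rng or np.random.default_rng()
    flat = grad.ravel().copy()
    k = max(1, int(top_frac * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the k largest |g|
    scale = noise_std * np.abs(flat[idx]).mean()
    flat[idx] += rng.normal(0.0, scale, size=k)
    return flat.reshape(grad.shape)

def deviation_correction(perturbed, prev_update, alpha=0.3):
    """Blend the perturbed gradient with the previous round's update direction.

    This keeps the corrected update roughly aligned with the earlier training
    trajectory so the perturbation costs less accuracy (assumed mechanism).
    """
    prev_dir = prev_update / (np.linalg.norm(prev_update) + 1e-12)
    return (1 - alpha) * perturbed + alpha * np.linalg.norm(perturbed) * prev_dir

# Toy usage on a flat gradient vector.
rng = np.random.default_rng(0)
grad = rng.normal(size=1000)
prev_update = rng.normal(size=1000)
protected_grad = deviation_correction(amplitude_perturbation(grad, rng=rng), prev_update)
```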

Conference Title

2023 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom)

Rights Statement

This work is covered by copyright. You must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a specified licence, refer to the licence for details of permitted re-use. If you believe that this work infringes copyright please make a copyright takedown request using the form at https://www.griffith.edu.au/copyright-matters.

Citation

Lu, J; Xue, L; Wan, W; Li, M; Zhang, LY; Hu, S, Preserving Privacy of Input Features Across All Stages of Collaborative Learning, 2023 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), 2023, pp. 191-198