Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks
Author(s)
Xue, L
Hu, S
Zhao, R
Zhang, LY
Hu, S
Sun, L
Yao, D
Date
2024
Location
Vancouver, Canada
Abstract
Collaborative learning (CL) is a distributed learning framework that aims to protect user privacy by letting users jointly train a model while sharing only their gradient updates. However, gradient inversion attacks (GIAs), which recover users' training data from the shared gradients, pose severe privacy threats to CL. Existing defense methods adopt different techniques, e.g., differential privacy, cryptography, and perturbation, to defend against GIAs. Nevertheless, all current defenses suffer from a poor trade-off among privacy, utility, and efficiency. To mitigate these weaknesses, we propose a novel defense method, Dual Gradient Pruning (DGP), based on gradient pruning, which improves communication efficiency while preserving the utility and privacy of CL. Specifically, DGP slightly modifies gradient pruning to obtain a stronger privacy guarantee, and it also significantly improves communication efficiency, supported by a theoretical analysis of its convergence and generalization. Our extensive experiments show that DGP can effectively defend against the most powerful GIAs and reduce communication cost without sacrificing the model's utility.
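For illustration only, below is a minimal sketch of standard magnitude-based gradient pruning, the building block that DGP revisits. It is not the paper's DGP algorithm; the function name and the keep_ratio parameter are assumptions introduced here for the example.

import torch

def prune_gradient(grad: torch.Tensor, keep_ratio: float = 0.1) -> torch.Tensor:
    # Illustrative magnitude-based pruning (not the paper's DGP rule):
    # keep only the largest-magnitude entries of the gradient and zero the rest,
    # so that only a sparse update is shared with other participants.
    flat = grad.flatten()
    k = max(1, int(keep_ratio * flat.numel()))
    _, idx = torch.topk(flat.abs(), k)  # positions of the k largest magnitudes
    mask = torch.zeros_like(flat)
    mask[idx] = 1.0
    return (flat * mask).view_as(grad)

Per the abstract, DGP modifies this pruning step (its dual realization) to strengthen privacy against gradient inversion attacks while retaining the communication savings of sparsified updates.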
Conference Title
Proceedings of the AAAI Conference on Artificial Intelligence
Volume
38
Issue
6
Citation
Xue, L; Hu, S; Zhao, R; Zhang, LY; Hu, S; Sun, L; Yao, D, Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks, Proceedings of the AAAI Conference on Artificial Intelligence, 2024, 38 (6), pp. 6404-6412