Demystifying Uneven Vulnerability of Link Stealing Attacks against Graph Neural Networks
File version
Version of Record (VoR)
Author(s)
Zhang, H
Wu, B
Wang, S
Yang, X
Xue, M
Pan, S
Yuan, X
Location
Honolulu, United States
Abstract
While graph neural networks (GNNs) dominate the state of the art for learning on graphs in real-world applications, they have been shown to be vulnerable to a growing number of privacy attacks. For instance, link stealing is a well-known membership inference attack (MIA) on edges that infers whether an edge was present in a GNN’s training graph. Recent studies on independent and identically distributed data (e.g., images) have empirically demonstrated that individuals from different groups face different levels of privacy risk from MIAs, i.e., uneven vulnerability. However, theoretical evidence of such uneven vulnerability is missing. In this paper, we first present theoretical evidence of the uneven vulnerability of GNNs to link stealing attacks, which lays the foundation for demystifying such uneven risks among different groups of edges. We further demonstrate a group-based attack paradigm to expose the practical privacy harm to GNN users that derives from the uneven vulnerability of edges. Finally, we empirically validate the existence of pronounced uneven vulnerability on nine real-world datasets (e.g., an AUC difference of about 25% between groups in the Credit graph). Compared with existing methods, the superior performance of our group-based attack paradigm confirms that customising different strategies for different groups yields more effective privacy attacks.
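To illustrate the kind of attack the abstract refers to, the sketch below shows a minimal similarity-based link stealing baseline (not the paper's group-based method): because GNN message passing makes the posteriors of connected nodes correlate, an attacker who can query the target model's class-probability outputs may predict a candidate pair as an edge when the two posteriors are highly similar. All names (`link_stealing_attack`, the threshold value, the toy posteriors) are illustrative assumptions, not from the paper.

```python
import numpy as np

def cosine_similarity(p_u, p_v):
    """Cosine similarity between two posterior (class-probability) vectors."""
    return float(np.dot(p_u, p_v) / (np.linalg.norm(p_u) * np.linalg.norm(p_v)))

def link_stealing_attack(posteriors, candidate_pairs, threshold=0.9):
    """Predict which candidate node pairs are edges in the training graph.

    posteriors: dict mapping node id -> posterior vector queried from the
    target GNN. A pair is flagged as an edge when its posteriors are highly
    similar; the threshold is a hypothetical choice for this toy example.
    """
    predictions = {}
    for u, v in candidate_pairs:
        predictions[(u, v)] = cosine_similarity(posteriors[u], posteriors[v]) >= threshold
    return predictions

# Toy example: nodes 0 and 1 have near-identical posteriors (likely linked),
# while node 2's posterior points to a different class.
posteriors = {
    0: np.array([0.90, 0.05, 0.05]),
    1: np.array([0.88, 0.07, 0.05]),
    2: np.array([0.10, 0.10, 0.80]),
}
preds = link_stealing_attack(posteriors, [(0, 1), (0, 2)])
```

The paper's contribution, per the abstract, is to show theoretically that such attacks do not succeed uniformly across edges and to exploit this by tailoring the attack strategy per group of edges, rather than applying one global rule as above.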
Conference Title
Proceedings of the 40th International Conference on Machine Learning
Volume
202
Rights Statement
© The Author(s) 2023. The attached file is reproduced here in accordance with the copyright policy of the publisher. For information about this conference please refer to the conference’s website or contact the author(s).
Subject
Neural networks
Citation
Zhang, H; Wu, B; Wang, S; Yang, X; Xue, M; Pan, S; Yuan, X, Demystifying Uneven Vulnerability of Link Stealing Attacks against Graph Neural Networks, Proceedings of the 40th International Conference on Machine Learning, 2023, 202, pp. 41737-41752