When Better Features Mean Greater Risks: The Performance-Privacy Trade-Off in Contrastive Learning
File version
Version of Record (VoR)
Author(s)
Hu, Hongsheng
Luo, Wei
Zhang, Zhaoxi
Zhang, Yanjun
Yuan, Haizhuan
Zhang, Leo Yu
Location
Hanoi, Vietnam
Abstract
With the rapid advancement of deep learning, pre-trained encoder models have demonstrated exceptional feature extraction capabilities and play a pivotal role in deep learning research and applications. However, their widespread use has raised significant concerns about the risk of training data privacy leakage. This paper systematically investigates the privacy threats posed by membership inference attacks (MIAs) targeting encoder models, focusing on contrastive learning frameworks. Through experimental analysis, we reveal the significant impact of model architecture complexity on membership privacy leakage: as more advanced encoder frameworks improve feature-extraction performance, they simultaneously exacerbate privacy-leakage risks. Furthermore, this paper proposes a novel membership inference attack based on the p-norm of feature vectors, termed the Embedding Lp-Norm Likelihood Attack (LpLA). This method infers membership status by leveraging the statistical distribution of the p-norm of feature vectors. Experimental results across multiple datasets and model architectures demonstrate that LpLA outperforms existing methods in attack performance and robustness, particularly under limited attack knowledge and query volumes. This study not only uncovers the potential risks of privacy leakage in contrastive learning frameworks, but also provides a practical basis for privacy protection research in encoder models. We hope this work will draw greater attention to the privacy risks associated with self-supervised learning models and highlight the importance of balancing model utility with training data privacy. Our code is publicly available at: https://github.com/SeroneySun/LpLA_code.
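The attack idea summarized above — fitting the distribution of embedding p-norms separately for members and non-members, then classifying a target by a likelihood-ratio test on its norm — can be illustrated with a minimal sketch. This is an illustrative toy on synthetic embeddings assuming Gaussian-distributed norms and auxiliary data with known membership; the function names and setup are hypothetical and do not reproduce the paper's implementation.

```python
import numpy as np

def lp_norm(features, p=2):
    # p-norm of each row (embedding) vector
    return np.linalg.norm(features, ord=p, axis=1)

def gaussian_logpdf(x, mu, sigma):
    # log density of N(mu, sigma^2) evaluated at x
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def lpla_infer(target_features, member_norms, nonmember_norms, p=2):
    """Predict membership via a likelihood-ratio test on embedding p-norms."""
    mu_in, sd_in = member_norms.mean(), member_norms.std()
    mu_out, sd_out = nonmember_norms.mean(), nonmember_norms.std()
    x = lp_norm(target_features, p)
    llr = gaussian_logpdf(x, mu_in, sd_in) - gaussian_logpdf(x, mu_out, sd_out)
    return llr > 0  # True -> predicted member

# Toy demo: members' embeddings have a slightly larger scale, so their
# p-norms concentrate around a higher value than non-members'.
rng = np.random.default_rng(0)
members = rng.normal(0.0, 1.2, size=(500, 64))
nonmembers = rng.normal(0.0, 1.0, size=(500, 64))

targets = np.vstack([rng.normal(0.0, 1.2, size=(100, 64)),
                     rng.normal(0.0, 1.0, size=(100, 64))])
labels = np.array([True] * 100 + [False] * 100)

preds = lpla_infer(targets, lp_norm(members), lp_norm(nonmembers))
acc = (preds == labels).mean()
print(f"toy attack accuracy: {acc:.2f}")
```

On this synthetic separation the test comfortably beats random guessing; in practice the paper's point is that the norm distributions of real encoders separate members from non-members even under limited queries.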
Conference Title
ASIA CCS '25: Proceedings of the 20th ACM Asia Conference on Computer and Communications Security
Rights Statement
© 2025 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.
Citation
Sun, R; Hu, H; Luo, W; Zhang, Z; Zhang, Y; Yuan, H; Zhang, LY, When Better Features Mean Greater Risks: The Performance-Privacy Trade-Off in Contrastive Learning, ASIA CCS '25: Proceedings of the 20th ACM Asia Conference on Computer and Communications Security, 2025, pp. 488-500