Robust federated contrastive recommender system against targeted model poisoning attack

File version

Version of Record (VoR)

Author(s)
Yuan, W
Yang, C
Qu, L
Ye, G
Nguyen, QVH
Yin, H
Griffith University Author(s)
Date
2025
Abstract

Federated recommender systems (FedRecs) have garnered increasing attention recently, thanks to their privacy-preserving benefits. However, the decentralized and open characteristics of current FedRecs present at least two dilemmas. First, the performance of FedRecs is compromised by the highly sparse on-device data of each client. Second, the system's robustness is undermined by its vulnerability to model poisoning attacks launched by malicious users. In this paper, we introduce a novel contrastive learning framework, referred to as CL4FedRec, designed to fully leverage each client's sparse data through embedding augmentation. Unlike previous contrastive learning approaches in FedRecs that require clients to share their private parameters, CL4FedRec follows the basic FedRec learning protocol, ensuring compatibility with most existing FedRec implementations. We then evaluate the robustness of FedRecs equipped with CL4FedRec by subjecting them to several state-of-the-art model poisoning attacks. Surprisingly, our observations reveal that contrastive learning tends to exacerbate the vulnerability of FedRecs to these attacks, because the enhanced embedding uniformity allows the poisoned target item embedding to move close to popular items. Based on this insight, we propose an enhanced and robust version of CL4FedRec (rCL4FedRec) by introducing a regularizer that maintains the distance among item embeddings with different popularity levels. Extensive experiments on four commonly used recommendation datasets demonstrate that rCL4FedRec significantly enhances both recommendation performance and the robustness of FedRecs.
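To make the idea in the abstract concrete, the following is a minimal, hypothetical sketch of the two loss components it describes: an InfoNCE-style contrastive loss over noise-augmented item embeddings, and a hinge-style regularizer that keeps popular and unpopular item embeddings apart so a poisoned target item cannot easily drift toward popular items. All function names, the noise-based augmentation, and the median popularity split are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def contrastive_loss(emb, noise_std=0.1, tau=0.2, rng=None):
    """InfoNCE loss between two noise-augmented views of item embeddings.
    (Illustrative sketch; the paper's augmentation may differ.)"""
    rng = rng if rng is not None else np.random.default_rng(0)
    v1 = emb + rng.normal(0.0, noise_std, emb.shape)   # augmented view 1
    v2 = emb + rng.normal(0.0, noise_std, emb.shape)   # augmented view 2
    v1 = v1 / np.linalg.norm(v1, axis=1, keepdims=True)
    v2 = v2 / np.linalg.norm(v2, axis=1, keepdims=True)
    logits = v1 @ v2.T / tau                           # pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))          # positives on the diagonal

def popularity_gap_regularizer(emb, pop, margin=1.0):
    """Hinge penalty on pairwise distances between popular and unpopular
    item embeddings (hypothetical stand-in for the paper's regularizer)."""
    median = np.median(pop)
    popular = emb[pop >= median]
    unpopular = emb[pop < median]
    # pairwise Euclidean distances between the two popularity groups
    d = np.linalg.norm(popular[:, None, :] - unpopular[None, :, :], axis=-1)
    return float(np.maximum(0.0, margin - d).mean())
```

In training, both terms would presumably be added to the usual recommendation loss, e.g. `loss = rec_loss + a * contrastive_loss(emb) + b * popularity_gap_regularizer(emb, pop)`, with `a` and `b` as weighting hyperparameters.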

Journal Title

Science China Information Sciences

Volume

68

Issue

4

Funder(s)

ARC

Grant identifier(s)

DP240101108

Rights Statement

© The Author(s) 2025. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Citation

Yuan, W; Yang, C; Qu, L; Ye, G; Nguyen, QVH; Yin, H, Robust federated contrastive recommender system against targeted model poisoning attack, Science China Information Sciences, 2025, 68 (4), pp. 140103
