Certified Unlearning for Federated Recommendation

Author(s)
Huynh, Thanh Trung
Nguyen, Trong Bang
Nguyen, Thanh Toan
Nguyen, Phi Le
Yin, Hongzhi
Nguyen, Quoc Viet Hung
Nguyen, Thanh Tam
Date
2024
Abstract

Recommendation systems play a crucial role in delivering web-based suggestions by leveraging user behavior, preferences, and interests. Amid growing privacy concerns and the proliferation of handheld devices, federated recommender systems have emerged as a promising solution: each client trains a local model and exchanges only model updates with a central server, thus preserving data privacy. However, certain use cases necessitate the removal of specific clients' contributions, a process known as "unlearning". Existing machine unlearning methods are designed for centralized settings and do not cater to the collaborative nature of recommendation systems, overlooking their unique characteristics. This paper proposes CFRU, a novel federated recommendation unlearning model that enables efficient and certified removal of target clients from the global model. Instead of retraining the model, our approach rolls back the training process and eliminates the historical updates associated with the target client. To store these historical updates efficiently, we propose sampling strategies that retain only the most significant ones. Furthermore, we analyze the bias introduced by removing the target clients' updates at each training round and derive an estimate of it using the Lipschitz condition. Leveraging this estimate, we propose an efficient iterative scheme that accumulates the bias across all rounds, compensating for the removed updates and recovering the global model's utility without requiring post-training steps. Extensive experiments on two real-world datasets, under two poisoning-attack scenarios, show that our unlearning technique achieves model quality 99.3% equivalent to retraining from scratch while running up to 1000 times faster.
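
To make the rollback-and-compensate idea above concrete, the Python sketch below removes a target client's stored updates from the final global model and accumulates a Lipschitz-style bound on the deviation each removal propagates to later rounds. This is an illustrative sketch only, not the paper's CFRU implementation: the function name unlearn_client, the per-round {client_id: update} history format, the FedAvg-style averaging, and the (1 + lr*L) growth factor are all assumptions standing in for the paper's actual analysis.

import numpy as np

def unlearn_client(w_final, history, target, lr=0.1, L=1.0):
    """Remove `target`'s contributions from `w_final` without retraining.

    history: per-round dicts {client_id: update vector}, kept only for
    sampled, significant rounds (per the abstract's sampling strategy).
    """
    T = len(history)
    correction = np.zeros_like(w_final)
    for t, round_updates in enumerate(history):
        if target not in round_updates:
            continue
        # The target's share of this round's averaged (FedAvg-style) update.
        delta = lr * round_updates[target] / len(round_updates)
        # Hypothetical bound: under an L-Lipschitz update map, a perturbation
        # introduced at round t grows by at most (1 + lr*L) in each of the
        # remaining T - t - 1 rounds; accumulate that instead of retraining.
        correction += delta * (1.0 + lr * L) ** (T - t - 1)
    # Subtracting the accumulated bias compensates for the removed updates
    # in one shot, with no post-training steps.
    return w_final - correction

# Toy usage: 3 clients, 5 sampled rounds, a 4-parameter model.
rng = np.random.default_rng(0)
history = [{c: rng.normal(size=4) for c in range(3)} for _ in range(5)]
w_final = rng.normal(size=4)
print(unlearn_client(w_final, history, target=1))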

Journal Title

ACM Transactions on Information Systems

Funder(s)

ARC

Grant identifier(s)

DP240101108

DE200101465

Note

This publication has been entered in Griffith Research Online as an advance online version.

Subject

Information and computing sciences

Data management and data science

Citation

Huynh, TT; Nguyen, TB; Nguyen, TT; Nguyen, PL; Yin, H; Nguyen, QVH; Nguyen, TT, Certified Unlearning for Federated Recommendation, ACM Transactions on Information Systems, 2024
