Hierarchical Federated Learning in MEC Networks with Knowledge Distillation
Author(s)
Tong, Ngoc Anh
Nguyen, Binh P
Nguyen, Quoc Viet Hung
Nguyen, Phi Le
Huynh, Thanh Trung
Location
Yokohama, Japan
Abstract
Modern automobiles are equipped with advanced computing capabilities, allowing them to become powerful computing units capable of processing large amounts of data and training machine learning models. However, machine learning algorithms typically require a large centralized dataset, raising concerns about users’ privacy. Federated Learning (FL) is a distributed machine learning paradigm that tackles this problem, allowing intelligent vehicles to collaboratively train machine learning models locally without having to compromise their private data. Several works have applied Federated Learning to Mobile Edge Computing (MEC) networks with a 3-tier architecture consisting of mobile clients, edge servers, and cloud servers, where each edge server aggregates its local set of clients and the cloud server aggregates edge servers to learn a global model. This approach reduces the expensive communication costs to the far-away cloud server. However, this 3-tier paradigm faces several challenges, a notable one being clients’ constant mobility, which leaves regional edges with a fluctuating set of participating clients at each round, a phenomenon we refer to as distribution drift. This drift introduces instability into the local training process, leading to suboptimal accuracy and convergence. As a solution, we propose a local training process based on the knowledge distillation mechanism. Specifically, we employ the global model and an ensemble of historical regional models from the edge servers as sources of knowledge to guide the local training process, preventing the local models from drifting away from the global knowledge and preserving information from clients that have left the region. Experimental results show that the proposed method achieves better performance than baseline methods.
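The abstract describes a local objective that combines the usual supervised loss with distillation from two teachers: the global model and an ensemble of historical regional models. A minimal numpy sketch of such a loss is shown below; the function names, weighting coefficients (`alpha`, `beta`), and temperature `T` are illustrative assumptions, not the paper's actual formulation or hyperparameters.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over the last axis.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q), averaged over the batch.
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

def local_distillation_loss(local_logits, labels, global_logits,
                            regional_logits_history,
                            alpha=0.5, beta=0.3, T=2.0):
    """Hypothetical local objective: cross-entropy on private data,
    plus KL distillation from the global model's soft predictions,
    plus KL distillation from the averaged soft predictions of an
    ensemble of historical regional (edge) models."""
    n = local_logits.shape[0]
    probs = softmax(local_logits)
    # Supervised cross-entropy on the client's own labels.
    ce = float(np.mean(-np.log(np.clip(probs[np.arange(n), labels],
                                       1e-12, 1.0))))
    # Distill from the global model (anchors against drifting away).
    kd_global = kl_div(softmax(global_logits, T), softmax(local_logits, T))
    # Distill from the ensemble of historical regional models
    # (preserves knowledge from clients that left the region).
    ensemble = np.mean([softmax(l, T) for l in regional_logits_history],
                       axis=0)
    kd_regional = kl_div(ensemble, softmax(local_logits, T))
    return ce + alpha * kd_global + beta * kd_regional
```

In this sketch the ensemble is a simple mean of the historical regional models' softened outputs; how the paper actually weights or selects historical models is not specified in the abstract.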
Conference Title
2024 International Joint Conference on Neural Networks (IJCNN)
Citation
Nguyen, TD; Tong, NA; Nguyen, BP; Nguyen, QVH; Nguyen, PL; Huynh, TT, Hierarchical Federated Learning in MEC Networks with Knowledge Distillation, 2024 International Joint Conference on Neural Networks (IJCNN), 2024