KL-FedDis: A Federated Learning Approach with Distribution Information Sharing Using Kullback-Leibler Divergence for Non-IID Data
Abstract
Data heterogeneity, or non-IID (non-independent and identically distributed) data, is one of the most prominent challenges in Federated Learning (FL). In the non-IID setting, each client holds its own local data, which may not be independently and identically distributed. This arises because the clients participating in federated learning typically have unique local datasets that vary significantly due to factors such as geographical location, user behavior, or specific contexts. Model divergence is another critical challenge: local models trained on different clients' data may diverge significantly, making it difficult for the global model to converge. Several federated learning methods, such as FedDis, FedProx, and FedAvg, have been introduced to handle non-IID data, but their accuracy remains low. To address clients' non-IID data while ensuring privacy, federated learning equipped with an appropriate distribution-sharing mechanism is an effective solution. In this paper, a modified FedDis learning method called KL-FedDis is proposed, which incorporates Kullback-Leibler (KL) divergence as the regularization technique. KL-FedDis improves accuracy and computation time over the FedDis and FedAvg techniques by preserving the distribution information and encouraging better collaboration among the local models through KL divergence.
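The core idea can be illustrated with a small sketch. Each client augments its local loss with the KL divergence D_KL(P ‖ Q) = Σ_i P(i) log(P(i)/Q(i)) between its own distribution information P and the server-aggregated distribution Q, which discourages the local models from drifting apart. The following is a minimal sketch under assumed details (a PyTorch classifier, regularizing the mean softmax output of each batch, and an illustrative kl_weight hyperparameter); the paper's exact formulation may differ.

```python
# Minimal sketch of KL-regularized local training in a FedDis-style setup.
# Helper names (local_train_step, aggregate_distributions) and kl_weight
# are illustrative assumptions, not the authors' actual implementation.
import torch
import torch.nn.functional as F

def local_train_step(model, optimizer, batch, global_dist, kl_weight=0.1):
    """One local update: cross-entropy plus a KL regularizer that pulls the
    client's mean predictive distribution toward the server-shared one."""
    x, y = batch
    optimizer.zero_grad()
    logits = model(x)
    ce_loss = F.cross_entropy(logits, y)

    # Client's mean predictive distribution over the current batch.
    local_dist = F.softmax(logits, dim=1).mean(dim=0)

    # KL(local || global); F.kl_div takes log-probabilities as its first
    # argument and probabilities as its second.
    kl_term = F.kl_div(global_dist.clamp_min(1e-8).log(),
                       local_dist, reduction="sum")

    loss = ce_loss + kl_weight * kl_term
    loss.backward()
    optimizer.step()
    return loss.item(), local_dist.detach()

def aggregate_distributions(client_dists):
    """Server side: average and renormalize the distribution vectors shared
    by the clients, yielding the global distribution for the next round."""
    avg = torch.stack(client_dists).mean(dim=0)
    return avg / avg.sum()
```

In this sketch the clients only share low-dimensional distribution vectors rather than raw data, which is how a distribution-sharing mechanism of this kind can preserve privacy while still coordinating the local models.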
Keywords: Federated learning, FedDis, KL divergence, Regularization