KL-FedDis: A federated learning approach with distribution information sharing using Kullback-Leibler divergence for non-IID data
Abstract
Data heterogeneity, or non-IID (non-independent and identically distributed) data, is one of the prominent challenges in Federated Learning (FL). In the non-IID setting, each client holds its own local data, which may not be independently and identically distributed. This arises because clients involved in federated learning typically have unique local datasets that vary significantly due to factors such as geographical location, user behavior, or specific contexts. Model divergence is another critical challenge: local models trained on different clients' data may diverge significantly, making it difficult for the global model to converge. To handle non-IID data, several federated learning methods such as FedDis, FedProx, and FedAvg have been introduced, but their accuracy remains low. Addressing clients' non-IID data while preserving privacy calls for federated learning combined with an appropriate distribution-sharing mechanism. In this paper, a modified FedDis learning method called KL-FedDis is proposed, which incorporates Kullback-Leibler (KL) divergence as the regularization technique. KL-FedDis improves accuracy and computation time over the FedDis and FedAvg techniques by successfully maintaining distribution information and encouraging improved collaboration among the local models through KL divergence.
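The abstract describes KL-FedDis as adding a KL-divergence regularizer that keeps each client's local model consistent with shared distribution information. A minimal sketch of how such a regularized local objective could look is given below, assuming a PyTorch setup; the function name kl_regularized_loss, the weight lam, the cross-entropy task loss, and the representation of distribution information as per-class probability vectors are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def kl_regularized_loss(logits, labels, local_dist, shared_dist, lam=0.1):
    """Hypothetical local objective: task loss plus a KL-divergence
    penalty tying the client's distribution summary to the shared one."""
    # Standard supervised task loss on the client's local batch.
    task_loss = F.cross_entropy(logits, labels)
    # F.kl_div(log_q, p) computes KL(p || q); here it penalizes how far
    # the client's local distribution drifts from the shared distribution.
    kl_term = F.kl_div(shared_dist.log(), local_dist, reduction="batchmean")
    return task_loss + lam * kl_term

# Toy usage with random tensors (10-class problem, batch of 4).
logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
local_dist = torch.softmax(torch.randn(10), dim=0)   # client's class distribution
shared_dist = torch.softmax(torch.randn(10), dim=0)  # shared (global) distribution
loss = kl_regularized_loss(logits, labels, local_dist, shared_dist)
```

In this sketch, lam trades off local fit against agreement with the shared distribution; a larger value pulls clients closer together at the cost of per-client accuracy.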
Keywords: Federated learning, FedDis, KL divergence, Regularization
Vol. 5, No. 1, Article 100182, March 2025