KL-FedDis: A federated learning approach with distribution information sharing using Kullback-Leibler divergence for non-IID data
Abstract
Data heterogeneity, or the identification of non-IID (non-independent and identically distributed) data, is one of the prominent challenges in Federated Learning (FL). In the non-IID setting, each client holds its own local data, which may not be independently and identically distributed. This arises because the clients involved in federated learning typically have unique local datasets that vary significantly due to factors such as geographical location, user behavior, or specific contexts. Model divergence is another critical challenge: local models trained on different clients' data may diverge significantly, making it difficult for the global model to converge. To identify non-IID data, a few federated learning methods such as FedDis, FedProx, and FedAvg have been introduced, but their accuracy remains low. To address clients' non-IID data while ensuring privacy, federated learning combined with an appropriate distribution-sharing mechanism is an effective solution. In this paper, a modified FedDis learning method called KL-FedDis is proposed, which incorporates Kullback-Leibler (KL) divergence as the regularization technique. KL-FedDis improves accuracy and computation time over the FedDis and FedAvg techniques by successfully preserving distribution information and encouraging improved collaboration among the local models through the use of KL divergence.
Keywords: Federated learning, FedDis, KL divergence, Regularization
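To make the regularization idea concrete: the abstract does not give the exact objective, so the following is a minimal sketch assuming the KL penalty is applied between each client's local distribution $P_k$ and a shared reference distribution $Q$, with an assumed penalty weight $\lambda$:

$$D_{\mathrm{KL}}(P_k \,\|\, Q) = \sum_{x} P_k(x)\,\log \frac{P_k(x)}{Q(x)}, \qquad \mathcal{L}_k = \mathcal{L}_{\mathrm{task}}(\theta_k) + \lambda\, D_{\mathrm{KL}}(P_k \,\|\, Q)$$

The short Python sketch below illustrates this form of KL-regularized client objective under the same assumptions; the function names, the example distributions, and the weight `lam` are illustrative and not taken from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two discrete probability distributions."""
    p = np.asarray(p, dtype=float) + eps  # eps avoids log(0) / division by zero
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()       # renormalize after smoothing
    return float(np.sum(p * np.log(p / q)))

def regularized_local_loss(task_loss, local_dist, shared_dist, lam=0.1):
    # Hypothetical client objective: task loss plus a KL penalty that pulls
    # the client's local distribution toward the shared distribution info.
    # `lam` (the penalty weight) is an assumption, not from the paper.
    return task_loss + lam * kl_divergence(local_dist, shared_dist)

# Illustration: a skewed (non-IID) client pays a larger penalty than a
# client whose local labels roughly match the shared distribution.
shared = [0.25, 0.25, 0.25, 0.25]    # distribution info shared by the server
skewed = [0.70, 0.20, 0.05, 0.05]    # heavily non-IID client
balanced = [0.24, 0.26, 0.25, 0.25]  # near-IID client

print(regularized_local_loss(1.0, skewed, shared))    # ~1.052
print(regularized_local_loss(1.0, balanced, shared))  # ~1.000
```

Because KL divergence is asymmetric, penalizing $D_{\mathrm{KL}}(P_k \| Q)$ rather than $D_{\mathrm{KL}}(Q \| P_k)$ strongly discourages a client from putting mass where the shared distribution has little, which is one plausible way such a penalty could encourage alignment among local models.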
Vol 5, No. 1, Article 100182, March 2025