FedEL: Federated ensemble learning for non-iid data
Expert Systems with Applications, ISSN: 0957-4174, Vol: 237, Page: 121390
2024
- 3 Citations
- 8 Captures
Article Description
Federated learning (FL) is a collaborative training paradigm that fully utilizes distributed data while protecting data privacy. A key challenge in FL is statistical heterogeneity: local data distributions differ across clients, which makes local optimization objectives inconsistent and ultimately degrades the performance of the globally aggregated model. We propose Federated Ensemble Learning (FedEL), a novel solution to the non-independent and identically distributed (non-IID) problem that exploits the heterogeneity of client data distributions to train a group of diverse weak learners and combine them into a global model. Experiments demonstrate that FedEL improves performance in non-IID data scenarios; even under extreme statistical heterogeneity, its average accuracy is 3.54% higher than the state-of-the-art FL method. Moreover, FedEL reduces model storage and inference costs compared with traditional ensemble learning, and it shows good generalization ability in experiments across different datasets, including natural scene image datasets and medical image datasets.
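The description above only sketches FedEL at a high level. As a rough illustration of the underlying idea, and not the paper's actual algorithm, the snippet below trains one weak learner per client on label-skewed, non-IID shards of synthetic data and combines them server-side by soft voting; the dataset, partitioning scheme, logistic-regression learners, and averaging rule are all assumptions made purely for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_clients, n_classes = 8, 4

# Synthetic multi-class data; the held-out split stands in for global test data.
X, y = make_classification(n_samples=6000, n_features=20, n_informative=10,
                           n_classes=n_classes, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Label-skewed (non-IID) partition: each client's shard is dominated (~90%) by
# one class, so every client's local objective differs from the global one.
client_idx = [[] for _ in range(n_clients)]
for c in range(n_classes):
    idx = rng.permutation(np.where(y_tr == c)[0])
    cut = int(0.9 * len(idx))
    dominant = [k for k in range(n_clients) if k % n_classes == c]
    for k, chunk in zip(dominant, np.array_split(idx[:cut], len(dominant))):
        client_idx[k].extend(chunk)
    for k, chunk in enumerate(np.array_split(idx[cut:], n_clients)):
        client_idx[k].extend(chunk)

# Local phase: each client fits a weak learner on its own skewed shard.
weak_learners = []
for ix in client_idx:
    clf = LogisticRegression(max_iter=500)
    clf.fit(X_tr[ix], y_tr[ix])
    weak_learners.append(clf)

# Server-side ensemble: average class probabilities of all weak learners
# (soft voting) and take the argmax as the global prediction.
def ensemble_predict(X):
    probs = np.mean([clf.predict_proba(X) for clf in weak_learners], axis=0)
    return probs.argmax(axis=1)

acc_single = weak_learners[0].score(X_te, y_te)          # one skewed client alone
acc_ensemble = (ensemble_predict(X_te) == y_te).mean()   # ensemble of all clients
print(f"single skewed client: {acc_single:.3f} | ensemble: {acc_ensemble:.3f}")
```

On a toy setup like this, the soft-voting ensemble typically recovers much of the accuracy that any single skew-trained client loses, which is the intuition behind combining diverse weak learners; the paper additionally addresses the model storage and inference overhead that a naive ensemble of this kind incurs.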
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S0957417423018924
- http://dx.doi.org/10.1016/j.eswa.2023.121390
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85170637378&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S0957417423018924
- https://dx.doi.org/10.1016/j.eswa.2023.121390
Elsevier BV