Label noise analysis meets adversarial training: A defense against label poisoning in federated learning
Knowledge-Based Systems, ISSN: 0950-7051, Vol: 266, Page: 110384
2023
- 23 Citations
- 19 Usage
- 42 Captures
Metrics Details
- Citations: 23 (Citation Indexes: 23)
- Usage: 19 (Abstract Views: 19)
- Captures: 42 (Readers: 42)
Article Description
Data decentralization and privacy constraints in federated learning systems withhold user data from the server. Intruders can exploit this privacy feature by corrupting the federated network with forged updates derived from malicious data. This paper proposes a defense mechanism based on adversarial training and label noise analysis to address this problem. To do so, we design a generative adversarial scheme for vaccinating local models by injecting them with artificially made label noise that resembles backdoor and label flipping attacks. From the perspective of label noise analysis, all poisoned labels can be generated through three different mechanisms. We demonstrate how backdoor and label flipping attacks resemble each of these noise mechanisms and consider them all in the introduced design. In addition, we propose devising noisy-label classifiers for the client models. The combination of these two mechanisms enables the model to learn possible noise distributions, which eliminates the effect of corrupted updates generated by malicious activities. Moreover, this work conducts a comparative study on state-of-the-art deep noisy-label classifiers. The designed framework and selected methods are evaluated for intrusion detection on two Internet of Things (IoT) networks. The results indicate the effectiveness of the proposed approach.
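As a rough illustration of the noise-injection ("vaccination") idea described in the abstract, the sketch below shows how two common label noise mechanisms could be applied to a client's local labels before training: symmetric noise (labels flipped completely at random) and pair-flipping noise (class-conditional flips, resembling a targeted label flipping attack). This is not the authors' implementation; the function names, noise rates, and use of NumPy are assumptions made for illustration only.

```python
import numpy as np

def inject_symmetric_noise(labels, num_classes, noise_rate, rng=None):
    """Flip each label to a uniformly random *other* class with
    probability `noise_rate` (noise completely at random)."""
    rng = rng or np.random.default_rng()
    noisy = labels.copy()
    flip_mask = rng.random(len(noisy)) < noise_rate
    for i in np.where(flip_mask)[0]:
        # choose any class except the current one
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)
    return noisy

def inject_pair_noise(labels, num_classes, noise_rate, rng=None):
    """Flip each label to the next class (c -> c+1 mod K) with
    probability `noise_rate` (class-conditional noise, similar in
    structure to a targeted label flipping attack)."""
    rng = rng or np.random.default_rng()
    noisy = labels.copy()
    flip_mask = rng.random(len(noisy)) < noise_rate
    noisy[flip_mask] = (noisy[flip_mask] + 1) % num_classes
    return noisy

# Hypothetical usage: "vaccinate" a client's local labels before local training.
y_local = np.random.default_rng(0).integers(0, 5, size=1000)
y_noisy = inject_pair_noise(y_local, num_classes=5, noise_rate=0.2,
                            rng=np.random.default_rng(1))
print("fraction of flipped labels:", np.mean(y_local != y_noisy))
```

In the paper's framing, training each client's noisy-label classifier on such artificially corrupted labels lets the local model learn plausible noise distributions, so that genuinely poisoned updates have less influence on the aggregated global model.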
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S095070512300134X
- http://dx.doi.org/10.1016/j.knosys.2023.110384
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85149059722&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S095070512300134X
- https://scholar.uwindsor.ca/electricalengpub/187
- https://scholar.uwindsor.ca/cgi/viewcontent.cgi?article=1185&context=electricalengpub
- https://dx.doi.org/10.1016/j.knosys.2023.110384
Elsevier BV