PlumX Metrics

Label noise analysis meets adversarial training: A defense against label poisoning in federated learning

Knowledge-Based Systems, ISSN: 0950-7051, Vol: 266, Page: 110384
2023
  • Citations: 23
  • Usage: 19
  • Captures: 42
  • Mentions: 0
  • Social Media: 0


Article Description

Data decentralization and privacy constraints in federated learning systems withhold user data from the server. Intruders can exploit this privacy feature by corrupting the federated network with forged updates computed on malicious data. This paper proposes a defense mechanism based on adversarial training and label noise analysis to address this problem. To do so, we design a generative adversarial scheme for vaccinating local models by injecting them with artificially made label noise that resembles backdoor and label flipping attacks. From the perspective of label noise analysis, all poisoned labels can be generated through three different mechanisms. We demonstrate how backdoor and label flipping attacks resemble each of these noise mechanisms and consider all of them in the introduced design. In addition, we propose devising noisy-label classifiers for the client models. The combination of these two mechanisms enables the model to learn possible noise distributions, which eliminates the effect of corrupted updates generated through malicious activity. Moreover, this work conducts a comparative study of state-of-the-art deep noisy-label classifiers. The designed framework and selected methods are evaluated for intrusion detection on two Internet of Things networks. The results indicate the effectiveness of the proposed approach.
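The three label-noise mechanisms the abstract refers to are commonly described in the label-noise literature as noise completely at random (uniform flips), class-dependent noise (fixed class-to-class flips, resembling label flipping attacks), and instance-dependent noise (flips conditioned on the input, resembling backdoor-style corruption). The sketch below illustrates these three mechanisms; the function name, parameters, and the simple feature-based rule for the instance-dependent case are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def inject_label_noise(labels, mechanism="ncar", noise_rate=0.2,
                       num_classes=10, target_map=None, features=None, rng=None):
    """Illustrative generator for the three standard label-noise mechanisms.
    Hypothetical sketch (not the paper's code):
      - "ncar": noise completely at random -- uniform random flips
      - "nar":  class-dependent noise -- fixed class-to-class flips,
                analogous to a label flipping attack
      - "nnar": instance-dependent noise -- flip probability depends on the
                input features, loosely analogous to backdoor-style poisoning
    """
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    noisy = labels.copy()
    flip = rng.random(len(labels)) < noise_rate

    if mechanism == "ncar":
        # Uniform flips: replace selected labels with a random class.
        noisy[flip] = rng.integers(0, num_classes, flip.sum())
    elif mechanism == "nar":
        # Pair flipping: each class maps to a fixed target class.
        if target_map is None:
            target_map = {c: (c + 1) % num_classes for c in range(num_classes)}
        noisy[flip] = np.array([target_map[int(c)] for c in labels[flip]],
                               dtype=labels.dtype)
    elif mechanism == "nnar":
        # Instance-dependent: flip probability scales with a feature statistic
        # (a toy stand-in for input-conditioned poisoning).
        if features is None:
            raise ValueError("instance-dependent noise requires features")
        score = np.asarray(features).sum(axis=1)
        p = noise_rate * (score - score.min()) / (np.ptp(score) + 1e-12)
        flip = rng.random(len(labels)) < p
        noisy[flip] = rng.integers(0, num_classes, flip.sum())
    else:
        raise ValueError(f"unknown mechanism: {mechanism}")
    return noisy
```

In the vaccination scheme described above, clean local labels would be perturbed with noise of this kind during adversarial training, so the client models learn to tolerate the same distributions an attacker's poisoned updates would induce.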
