PlumX Metrics

Privacy-Enhanced Federated Learning: A Restrictively Self-Sampled and Data-Perturbed Local Differential Privacy Method

Electronics (Switzerland), ISSN: 2079-9292, Vol: 11, Issue: 23
2022
  • Citations: 7
  • Usage: 0
  • Captures: 3
  • Mentions: 1
  • Social Media: 0

Metrics Details

  • Citations: 7
    • Citation Indexes: 7
  • Captures: 3
  • Mentions: 1
    • News Mentions: 1
      • News: 1

Most Recent News

Researchers from Northeastern University Detail Findings in Electronics (Privacy-Enhanced Federated Learning: A Restrictively Self-Sampled and Data-Perturbed Local Differential Privacy Method)

2022 DEC 22 (NewsRx) -- By a News Reporter-Staff News Editor at Electronics Daily -- A new study on electronics is now available. According to …

Article Description

As a popular distributed learning framework, federated learning (FL) enables clients to train a model cooperatively without sharing their data, offering stronger security and advantages in processing large-scale, high-dimensional data. However, because model parameters are shared during the federated learning process, an attacker can still recover private information about participants' sensitive data by reverse-parsing those parameters. Local differential privacy (LDP) has recently proved effective for preserving privacy in federated learning, but it faces the inherent problem of balancing privacy, model performance, and algorithmic efficiency. In this paper, we propose a novel privacy-enhanced federated learning framework (Optimal LDP-FL) that achieves local differential privacy through client self-sampling and data perturbation mechanisms. We theoretically analyze the relationship between model accuracy and the client self-sampling probability. We propose a restrictive client self-sampling technique that eliminates the randomness of the self-sampling probability settings in existing studies and improves the utilization of the federated system. We also propose a novel, efficiency-optimized LDP data perturbation mechanism (Adaptive-Harmony), which allows an adaptive parameter range to reduce variance and improve model accuracy. Comprehensive experiments on the MNIST and Fashion MNIST datasets show that the proposed method significantly reduces computational and communication costs at the same level of privacy and model utility.
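
The two building blocks named above, Harmony-style data perturbation over an adaptive value range and client self-sampling, can be sketched roughly as follows. This is a minimal illustration rather than the authors' implementation: it assumes NumPy, a hypothetical per-tensor range taken as the update's maximum absolute value, and a fixed self-sampling probability q; unlike the original Harmony mechanism, which reports a single sampled dimension per client, it perturbs every coordinate with budget eps for readability.

```python
import numpy as np

def harmony_perturb(x: np.ndarray, eps: float, c: float) -> np.ndarray:
    """Perturb values clipped to [-c, c] into one of two extremes, unbiasedly."""
    x = np.clip(x, -c, c)
    e = np.exp(eps)
    # Probability of reporting the positive extreme rises linearly with x/c.
    p_pos = ((x / c) * (e - 1) + e + 1) / (2 * (e + 1))
    signs = np.where(np.random.random(x.shape) < p_pos, 1.0, -1.0)
    # Scaling by c*(e+1)/(e-1) makes each report an unbiased estimate of x.
    return signs * c * (e + 1) / (e - 1)

def client_report(update: np.ndarray, eps: float, q: float):
    """Client-side round: self-sample with probability q, then perturb."""
    if np.random.random() >= q:           # client stays silent this round
        return None
    c = np.max(np.abs(update)) + 1e-12    # assumed adaptive per-tensor range
    return harmony_perturb(update, eps, c)

# Server side: average whatever perturbed reports actually arrive.
reports = [r for r in (client_report(0.01 * np.random.randn(10), eps=1.0, q=0.3)
                       for _ in range(100)) if r is not None]
aggregate = np.mean(reports, axis=0) if reports else None
```

Because each report is an unbiased (if high-variance) estimate of the sending client's update, the plain server-side average approaches the mean update as more clients report; shrinking the range c is what the abstract credits with reducing that variance and improving accuracy.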

Bibliographic Details

Authors: Jianzhe Zhao; Ronglin Zhang; Wuganjing Song; Jiali Zheng; Jingran Feng; Mengbo Yang; Stan Matwin

Publisher: MDPI AG

Subjects: Engineering; Computer Science
