A polynomial proxy model approach to verifiable decentralized federated learning
Scientific Reports, ISSN: 2045-2322, Vol: 14, Issue: 1, Page: 28786
2024
Metrics Details
- Captures: 2
- Readers: 2
Article Description
Decentralized Federated Learning improves data privacy and eliminates single points of failure by removing reliance on centralized storage and model aggregation in distributed computing systems. Ensuring the integrity of computations during local model training is a significant challenge, especially before gradient updates are shared by each local client. Current methods for ensuring computation integrity typically patch local models to implement cryptographic techniques such as Zero-Knowledge Proofs. However, this approach becomes highly complex, and sometimes impractical, for large-scale models that use techniques such as random dropout to improve training convergence: random dropout introduces non-deterministic behavior, making it difficult to verify model updates under deterministic protocols. To address this issue, we propose ProxyZKP, a novel framework that combines Zero-Knowledge Proofs with polynomial proxy models to ensure computation integrity during local training. Each local node pairs a private model, used for online deep learning applications, with a proxy model that mediates decentralized training by exchanging gradient updates. The multivariate polynomial form of the proxy models makes them amenable to Zero-Knowledge Proofs, which verify the computation integrity of each node's updates without disclosing private data. Experimental results indicate that ProxyZKP significantly reduces computational load, achieving proof generation times 30–50% faster than established methods such as zk-SNARKs and Bulletproofs. This improvement stems largely from the high parallelization potential of the univariate polynomial decomposition approach. Additionally, integrating Differential Privacy into the ProxyZKP framework reduces the risk of Gradient Inversion attacks by adding calibrated noise to the gradients while maintaining competitive model accuracy. The results demonstrate that ProxyZKP is a scalable and efficient solution for ensuring training integrity in decentralized federated learning environments, particularly in scenarios with frequent model updates and demanding scalability requirements.
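To make the proxy-model idea concrete, here is a minimal sketch, not the paper's actual construction. The abstract does not specify the polynomial form, so this assumes a hypothetical degree-2 multivariate polynomial proxy trained with plain SGD; the class name `PolynomialProxy`, the squared-error loss, and the learning rate are all illustrative assumptions. The point it demonstrates is that when the proxy is polynomial, its gradients are themselves polynomial expressions in the inputs and parameters, which is what makes the shared update amenable to arithmetization inside a Zero-Knowledge Proof.

```python
import numpy as np

class PolynomialProxy:
    """Hypothetical degree-2 proxy: f(x) = x^T W2 x + w1 . x + b."""

    def __init__(self, dim: int, rng: np.random.Generator):
        self.w2 = rng.normal(scale=0.01, size=(dim, dim))
        self.w1 = rng.normal(scale=0.01, size=dim)
        self.b = 0.0

    def predict(self, x: np.ndarray) -> float:
        return float(x @ self.w2 @ x + self.w1 @ x + self.b)

    def gradients(self, x: np.ndarray, y: float):
        """Squared-error gradients. Each one is a polynomial in x and the
        parameters (e.g. dL/dW2 = err * outer(x, x)), so the update a node
        shares can be checked by a deterministic, ZKP-friendly circuit."""
        err = self.predict(x) - y
        return err * np.outer(x, x), err * x, err

    def apply(self, g2, g1, gb, lr: float = 0.01):
        # This gradient step is the update a node would broadcast and prove.
        self.w2 -= lr * g2
        self.w1 -= lr * g1
        self.b -= lr * gb

rng = np.random.default_rng(0)
proxy = PolynomialProxy(dim=4, rng=rng)
x, y = rng.normal(size=4), 1.0
proxy.apply(*proxy.gradients(x, y))
```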
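The Differential Privacy step mentioned in the description can likewise be sketched with the standard Gaussian mechanism: clip each gradient to bound its sensitivity, then add noise calibrated to that bound before the update leaves the node. The function name `privatize` and the values of `clip_norm` and `noise_multiplier` are illustrative placeholders; the abstract does not state the paper's actual calibration.

```python
import numpy as np

def privatize(grad: np.ndarray, clip_norm: float = 1.0,
              noise_multiplier: float = 1.1,
              rng: np.random.Generator = None) -> np.ndarray:
    """Clip-and-noise a gradient before sharing (Gaussian mechanism sketch)."""
    rng = rng or np.random.default_rng()
    # Clip: bound the L2 sensitivity of a single update to clip_norm.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Add Gaussian noise scaled to the sensitivity bound; this is what
    # blunts Gradient Inversion attacks on the shared update.
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

shared_grad = privatize(np.ones(4))  # the noised update a node would broadcast
```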
Bibliographic Details
Springer Science and Business Media LLC