Chinese Journal of Network and Information Security ›› 2022, Vol. 8 ›› Issue (1): 139-150.doi: 10.11959/j.issn.2096-109x.2022011

• Research and Development •

Privacy-preserving federated learning framework with irregular-majority users

Qianxin CHEN1,2, Renwan BI1,2, Jie LIN1, Biao JIN1, Jinbo XIONG1,2   

  1 College of Computer and Cyber Security, Fujian Normal University, Fuzhou 350117, China
    2 Fujian Provincial Key Laboratory of Network Security and Cryptology, Fujian Normal University, Fuzhou 350007, China
  • Revised: 2022-01-05 Online: 2022-02-15 Published: 2022-02-01
  • Supported by:
    The National Natural Science Foundation of China (61872088, 61872090, U1905211); The Natural Science Foundation of Fujian Province (2019J01276)

Abstract:

To address the problems that federated learning may suffer reduced aggregation efficiency when handling a majority of irregular users, and may leak parameter privacy when adopting plaintext communication, a privacy-preserving robust federated learning framework (PPRFL) was proposed, which ensures robustness against irregular users based on a designed secure division protocol. PPRFL enables models and their related information to be aggregated in ciphertext on edge servers, and lets users calculate model reliability locally, reducing the additional communication overhead caused by the secure multiplication protocol adopted in conventional methods, in addition to lowering the high computational overhead of homomorphic encryption by outsourcing computation to two edge servers. On this basis, after updating the local model parameters, each user jointly uses the validation set issued by the edge server and the one held locally to compute the model's loss value. The model reliability is then dynamically updated as the model weight, together with the historical loss values. Further, the model weight is scaled under the guidance of prior knowledge, and the ciphertext model and ciphertext weight information are sent to the edge servers to aggregate and update the global model parameters, ensuring that changes to the global model are contributed by users with high-quality data and improving the convergence speed. Security analysis under the hybrid argument model demonstrates that PPRFL can effectively protect the privacy of the model parameters and of intermediate interaction parameters, including user reliability. Experimental results show that PPRFL still achieves 92% accuracy when all participants in the federated aggregation task are irregular users, with convergence 1.4 times faster than PPFDL. Moreover, PPRFL still reaches 89% accuracy when the training data held by 80% of the users in the federated aggregation task are noisy, with convergence 2.3 times faster than PPFDL.
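The core aggregation idea described above, scoring each user by the loss history of its local model and using the resulting reliability as the aggregation weight, can be sketched in plaintext as follows. This is a minimal illustrative sketch, not the paper's scheme: the exponential reliability formula, the smoothing factor `beta`, and all variable names are assumptions, and the ciphertext computation that PPRFL outsources to two edge servers is omitted.

```python
import numpy as np

def reliability(loss_history, beta=0.5):
    """Toy reliability score from a user's validation-loss history.

    Lower loss yields higher reliability. The exponential moving
    average and the exp(-loss) mapping are illustrative assumptions,
    not the formula used in PPRFL.
    """
    ema = loss_history[0]
    for loss in loss_history[1:]:
        ema = beta * ema + (1 - beta) * loss  # smooth the historic losses
    return float(np.exp(-ema))

def aggregate(models, reliabilities):
    """Reliability-weighted average of user model parameters.

    In PPRFL this step runs over ciphertexts on the edge servers;
    it is shown here in plaintext for clarity.
    """
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()  # normalise weights so they sum to 1
    return sum(wi * m for wi, m in zip(w, models))

# Example: two users, the second with a noisier (higher-loss) history,
# so its model contributes less to the global model.
models = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
rel = [reliability([0.6, 0.5, 0.4]), reliability([2.0, 2.2, 2.1])]
global_model = aggregate(models, rel)
```

In this sketch the low-loss user dominates the aggregate, which mirrors the abstract's claim that global model changes are driven by high-quality data users.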

Key words: federated learning, privacy-preserving, secure aggregation, irregular-majority users, secure division protocol
