Journal on Communications ›› 2022, Vol. 43 ›› Issue (10): 94-105. doi: 10.11959/j.issn.1000-436x.2022189

• Academic Paper •

Research on federated learning approach based on local differential privacy

Haiyan KANG, Yuanrui JI

  1. School of Information Management, Beijing Information Science and Technology University, Beijing 100192, China
  • Revised: 2022-09-23 Online: 2022-10-25 Published: 2022-10-01
  • About the authors: Haiyan KANG (1971− ), born in Lingshou, Hebei, Ph.D., is a professor at Beijing Information Science and Technology University. His research interests include network security and privacy protection.
    Yuanrui JI (1997− ), born in Yinchuan, Ningxia, is a master's student at Beijing Information Science and Technology University. Her research interests include network security and privacy protection.
  • Supported by:
    The National Social Science Foundation of China (21BTQ079); The National Natural Science Foundation of China (61370139); The Ministry of Education Humanities and Social Science Project (20YJAZH046); Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing Fund



Abstract:

As a collaborative machine learning framework, federated learning allows multiple parties to train a shared model without exchanging their raw data, preserving participants' privacy while still making full use of their data. Nevertheless, from an information-theoretic viewpoint, a curious server can still infer private information from the shared models uploaded by participants. To solve the inference attack problem in federated learning training, a local differential privacy federated learning (LDP-FL) approach was proposed. Firstly, to protect the federated model training process from inference attacks, a local differential privacy mechanism was designed for the transmission of parameters in federated learning. Secondly, a performance loss constraint mechanism for federated learning was proposed and designed to reduce the performance loss of the locally differentially private federated model by optimizing the constraint range of the loss function. Finally, the effectiveness of the proposed LDP-FL approach was verified by comparative experiments on the MNIST and Fashion-MNIST datasets.
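The general idea described in the abstract — each client perturbing its model update with a local differential privacy mechanism before sharing it, so the server only ever aggregates noisy updates — can be sketched as follows. This is a minimal illustration using coordinate-wise clipping and the Laplace mechanism, not the paper's exact algorithm; the clipping bound `clip` and privacy budget `epsilon` are assumed parameters.

```python
import numpy as np

def ldp_perturb(update, clip=1.0, epsilon=1.0, rng=None):
    """Clip a client's model update and add Laplace noise locally,
    so only a perturbed update ever leaves the client."""
    rng = rng or np.random.default_rng()
    update = np.asarray(update, dtype=float)
    # Bound each coordinate to [-clip, clip]; any two clipped values
    # then differ by at most 2*clip (the per-coordinate sensitivity).
    clipped = np.clip(update, -clip, clip)
    # Laplace noise with scale 2*clip/epsilon yields epsilon-LDP
    # for each coordinate independently.
    noise = rng.laplace(loc=0.0, scale=2.0 * clip / epsilon,
                        size=clipped.shape)
    return clipped + noise

def federated_average(updates):
    """Server-side aggregation: average the already-perturbed updates.
    The Laplace noise is zero-mean, so it tends to cancel out here."""
    return np.mean(np.stack(updates), axis=0)

# Each client perturbs locally; the server sees only noisy updates.
clients = [np.full(4, 0.5), np.full(4, -0.5), np.full(4, 0.2)]
noisy = [ldp_perturb(u, clip=1.0, epsilon=2.0) for u in clients]
global_update = federated_average(noisy)
```

In this sketch the server never observes a raw update, which is the property the abstract's inference-attack defense relies on; the paper's performance loss constraint mechanism, by contrast, concerns how the noise's effect on the loss function is bounded and is not modeled here.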

Key words: differential privacy, federated learning, deep learning

CLC number:
