Journal on Communications ›› 2023, Vol. 44 ›› Issue (5): 193-205. doi: 10.11959/j.issn.1000-436x.2023094

• Papers •

• About the authors:
    ZHANG Jiale (1994- ), born in Bengbu, Anhui, Ph.D., is a lecturer and master's supervisor at Yangzhou University. His research interests include AI security, federated learning, and data privacy protection.
    ZHU Chengcheng (2000- ), born in Linquan, Anhui, is a master's student at Yangzhou University. His research interests include federated learning security and privacy protection.
    SUN Xiaobing (1985- ), born in Jiangyan, Jiangsu, Ph.D., is a professor and doctoral supervisor at Yangzhou University. His research interests include software security, AI security, and blockchain security.
    CHEN Bing (1970- ), born in Nantong, Jiangsu, Ph.D., is a professor and doctoral supervisor at Nanjing University of Aeronautics and Astronautics. His research interests include wireless networks, AI security, cyberspace security, and intelligent unmanned systems.

Membership inference attack and defense method in federated learning based on GAN

Jiale ZHANG1,2, Chengcheng ZHU1,2, Xiaobing SUN1,2, Bing CHEN3   

  1. 1 School of Information Engineering, Yangzhou University, Yangzhou 225127, China
    2 Jiangsu Engineering Research Center for Knowledge Management and Intelligent Service, Yangzhou 225127, China
    3 College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
  • Revised: 2023-03-06  Online: 2023-05-25  Published: 2023-05-01
  • Supported by:
    The National Natural Science Foundation of China (62206238); The Natural Science Foundation of Jiangsu Province (BK20220562); The Natural Science Foundation of the Jiangsu Higher Education Institutions of China (22KJB520010); The Yangzhou City-Yangzhou University Science and Technology Cooperation Fund Project (YZ2021157, YZ2021158)


Abstract:

Federated learning systems are highly vulnerable to membership inference attacks launched by malicious participants in the prediction stage, and existing defense methods struggle to balance privacy protection against model loss. To address these problems, membership inference attacks and their defenses were explored in the context of federated learning. First, two membership inference attack methods based on the generative adversarial network (GAN) were proposed: a class-level attack and a user-level attack, where the former aims to leak the training-data privacy of all participants, while the latter can target one specific participant. Furthermore, a membership inference defense method for federated learning based on adversarial examples (DefMIA) was proposed, which adds adversarial-example noise to the global model parameters and thereby defends effectively against membership inference attacks while preserving the accuracy of federated learning. The experimental results show that the class-level and user-level attacks achieve over 90% attack accuracy in federated learning, whereas after applying DefMIA their attack accuracy drops markedly, approaching random guessing (50%).
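The attacks studied in the paper are GAN-based; as a much simpler, purely illustrative sketch of the underlying membership-inference idea (a model tends to behave more confidently on its own training members), a classic confidence-threshold baseline can be written as follows. All names and numbers below are hypothetical assumptions for illustration, not the paper's method:

```python
import numpy as np

def confidence_mia(model_confidences, threshold=0.9):
    """Confidence-threshold membership inference baseline:
    samples that the target model predicts with high confidence
    are guessed to be training members. This is a simplified
    stand-in, not the paper's GAN-based attack."""
    return model_confidences >= threshold

# Toy data: members tend to receive higher prediction confidence
member_conf = np.array([0.97, 0.95, 0.99, 0.88])
nonmember_conf = np.array([0.60, 0.72, 0.91, 0.55])

guesses_members = confidence_mia(member_conf)        # should be True
guesses_nonmembers = confidence_mia(nonmember_conf)  # should be False

# Attack accuracy on this balanced toy set
correct = guesses_members.sum() + (~guesses_nonmembers).sum()
accuracy = correct / (len(member_conf) + len(nonmember_conf))
```

On this toy set the baseline reaches 75% attack accuracy; the paper's GAN-based class-level and user-level attacks exceed 90% in real federated learning settings.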

Key words: federated learning, membership inference attack, generative adversarial network, adversarial example, privacy leakage
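The DefMIA defense summarized in the abstract perturbs the global model parameters with adversarial-example-style noise before they are exposed to participants. A minimal sketch of that intuition, assuming a bounded FGSM-style sign perturbation (the paper's actual noise construction differs and is not reproduced here; all names are hypothetical):

```python
import numpy as np

def defmia_perturb(params, direction_sign, epsilon=0.01):
    """Add a small, bounded adversarial-style perturbation to the
    global model parameters before release, so that membership
    signals are obscured while the parameters stay close enough to
    the originals to preserve model accuracy. `direction_sign` is a
    stand-in for the sign of an adversarially chosen direction."""
    return params + epsilon * direction_sign

rng = np.random.default_rng(0)
params = rng.normal(size=5)            # toy global model parameters
direction = np.sign(rng.normal(size=5))  # hypothetical sign vector
released = defmia_perturb(params, direction, epsilon=0.01)

# The released parameters deviate from the originals by at most epsilon
max_dev = np.max(np.abs(released - params))
```

Because the perturbation is bounded by epsilon, the released parameters remain close to the originals, which is why such a defense can retain model accuracy while weakening the membership signal an attacker relies on.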

CLC number: 
