Journal on Communications ›› 2023, Vol. 44 ›› Issue (5): 193-205.doi: 10.11959/j.issn.1000-436x.2023094

• Papers •

Membership inference attack and defense method in federated learning based on GAN

Jiale ZHANG1,2, Chengcheng ZHU1,2, Xiaobing SUN1,2, Bing CHEN3   

  1. School of Information Engineering, Yangzhou University, Yangzhou 225127, China
  2. Jiangsu Engineering Research Center Knowledge Management and Intelligent Service, Yangzhou 225127, China
  3. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
  • Revised: 2023-03-06 • Online: 2023-05-25 • Published: 2023-05-01
  • Supported by:
    The National Natural Science Foundation of China(62206238);The Natural Science Foundation of Jiangsu Province(BK20220562);The Natural Science Foundation of Jiangsu Higher Education Institutions of China(22KJB520010);The Yangzhou City-Yangzhou University Science and Technology Cooperation Fund Project(YZ2021157);The Yangzhou City-Yangzhou University Science and Technology Cooperation Fund Project(YZ2021158)

Abstract:

To address the problem that federated learning systems are extremely vulnerable to membership inference attacks launched by malicious participants at the prediction stage, and that existing defense methods struggle to balance privacy protection against model utility loss, membership inference attacks and their defenses were explored in the federated learning setting. Firstly, two membership inference attack methods based on the generative adversarial network (GAN), called the class-level attack and the user-level attack, were proposed: the former aims to leak the training data privacy of all participants, while the latter can target a specific participant. Furthermore, a membership inference defense method for federated learning based on adversarial examples (DefMIA) was proposed, which effectively defends against membership inference attacks by adding adversarial-example noise to the global model parameters while preserving the accuracy of the federated model. Experimental results show that both the class-level and user-level membership inference attacks achieve over 90% attack accuracy in federated learning, whereas with the DefMIA method their attack accuracy drops significantly, approaching random guessing (50%).
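The core intuition behind membership inference is that models tend to be more confident on examples they were trained on, and that perturbing the model's behavior (as DefMIA does via adversarial noise) can push an attacker's accuracy back toward random guessing. The sketch below is not the paper's GAN-based method; it is a minimal toy illustration using a simulated confidence gap between members and non-members, a simple threshold attacker, and Gaussian noise as a stand-in for a defense. All function names and distribution parameters here are assumptions for illustration only.

```python
import random

random.seed(0)

def model_confidence(is_member):
    # Toy stand-in for a model's top-class confidence score: training-set
    # members tend to receive higher confidence than non-members
    # (the overfitting signal membership inference attacks exploit).
    return random.gauss(0.9 if is_member else 0.6, 0.1)

def threshold_attack(conf, threshold=0.75):
    # Simple threshold attacker: predict "member" when confidence is high.
    return conf >= threshold

def attack_accuracy(noise_scale=0.0, n=2000):
    # Evaluate the attacker on a balanced set of members and non-members.
    # noise_scale models a defense that perturbs the observable confidence.
    correct = 0
    for is_member in [True, False] * (n // 2):
        conf = model_confidence(is_member) + random.gauss(0.0, noise_scale)
        correct += threshold_attack(conf) == is_member
    return correct / n

print(attack_accuracy(0.0))  # high attack accuracy without any defense
print(attack_accuracy(1.0))  # noise pushes accuracy toward random guessing
```

In this toy setting, increasing the noise scale shrinks the separability of member and non-member confidence scores, mirroring the reported effect of DefMIA: attack accuracy falls from well above chance toward 50%, at the cost of (here unmodeled) utility loss that a real defense must also control.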

Key words: federated learning, membership inference attack, generative adversarial network, adversarial example, privacy leakage

