Chinese Journal of Network and Information Security ›› 2023, Vol. 9 ›› Issue (4): 29-39. doi: 10.11959/j.issn.2096-109x.2023051

• Academic Paper •

Privacy leakage risk assessment for reversible neural networks

Yifan HE1,2, Jie ZHANG2,3, Weiming ZHANG2,3, Nenghai YU2,3

  1. School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
  2. Key Laboratory of Electro-magnetic Space Information, Chinese Academy of Sciences, Hefei 230027, China
  3. School of Cyber Science and Technology, University of Science and Technology of China, Hefei 230027, China
  • Revised: 2023-02-06  Online: 2023-08-01  Published: 2023-08-01
  • About the authors:
    Yifan HE (1998- ), male, born in Wuhan, Hubei; master's student at the University of Science and Technology of China; main research interests: artificial intelligence security and privacy protection.
    Jie ZHANG (1995- ), male, born in Zhumadian, Henan; Ph.D. student at the University of Science and Technology of China; main research interests: artificial intelligence security, privacy, and copyright protection.
    Weiming ZHANG (1976- ), male, born in Dingzhou, Hebei; professor and doctoral supervisor at the University of Science and Technology of China; main research interests: information hiding, multimedia content security, and artificial intelligence security.
    Nenghai YU (1964- ), male, born in Wuwei, Anhui; professor and doctoral supervisor at the University of Science and Technology of China; main research interests: multimedia information retrieval, image processing and video communication, and digital media content security.
  • Supported by:
    The National Natural Science Foundation of China (U20B2047, 62072421, 62002334, 62102386, 62121002); the Exploration Fund of the University of Science and Technology of China (YD3480002001); the Fundamental Research Funds for the Central Universities (WK2100000011)



Abstract:

In recent years, deep learning has become a core technology in many fields. However, training deep learning models often requires large amounts of data, which may contain private information, including personal identifiers (e.g., phone and ID numbers) and sensitive details (e.g., financial or medical records). Consequently, the privacy risks of artificial intelligence models have become a research focus in academia. Existing privacy research on deep learning, however, has concentrated on traditional neural networks, with little attention to emerging architectures such as reversible neural networks. Reversible networks have a distinctive structure in which the input of an upper layer can be recovered directly from the output of the lower layer. Intuitively, this structure retains more information about the training data and may therefore pose a higher risk of privacy leakage than traditional networks. The privacy of reversible networks was therefore examined from two aspects, data privacy leakage and model function privacy leakage, and this risk assessment strategy was applied to two classical reversible networks, RevNet and i-RevNet. Four attack methods, namely membership inference attack, model inversion attack, attribute inference attack, and model extraction attack, were used to analyze privacy leakage. The experimental results demonstrate that reversible networks exhibit more serious privacy risks than traditional neural networks under membership inference, model inversion, and attribute inference attacks, and similar risks under model extraction attack. Given the growing adoption of reversible neural networks in tasks that involve sensitive data, these privacy risks need to be addressed. Based on the analysis of the experimental results, potential countermeasures were proposed that may guide the future development of reversible networks.
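For readers unfamiliar with the structure the abstract refers to, the following is a minimal sketch (not the paper's code) of the additive coupling at the core of RevNet-style reversible blocks: the block's input can be reconstructed exactly from its output, so, as the abstract argues, more information about the inputs survives in the model. The sub-networks F and G, the toy dimensions, and the random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# F and G stand in for arbitrary sub-networks (here: fixed random tanh layers).
W_f = rng.standard_normal((16, 16))
W_g = rng.standard_normal((16, 16))
F = lambda x: np.tanh(x @ W_f)
G = lambda x: np.tanh(x @ W_g)

def forward(x1, x2):
    """Additive coupling: (x1, x2) -> (y1, y2)."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    """Exact inversion: recover (x1, x2) from (y1, y2) alone."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = rng.standard_normal((4, 16)), rng.standard_normal((4, 16))
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert np.allclose(x1, r1) and np.allclose(x2, r2)  # reconstruction is exact
```

Because the inverse requires no stored activations, nothing about the input is discarded between layers, which is the intuition behind the elevated data-level leakage the paper reports.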

Key words: reversible neural network, privacy protection, membership inference attack, privacy threat
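As a companion to the attacks named above, here is a minimal sketch of the simplest form of membership inference attack; it is not the attack implementation evaluated in the paper. It relies on the common observation that a model is more confident on its training members, so thresholding the top softmax probability separates members from non-members. The 0.9 threshold and the toy outputs are illustrative assumptions.

```python
import numpy as np

def membership_scores(probs: np.ndarray) -> np.ndarray:
    """Attack score per sample: the model's top softmax confidence."""
    return probs.max(axis=1)

def infer_members(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Predict 'member' where confidence exceeds the chosen threshold."""
    return membership_scores(probs) > threshold

# Toy softmax outputs: the first two rows mimic overconfident predictions on
# training members, the last two mimic unseen (non-member) samples.
probs = np.array([
    [0.97, 0.02, 0.01],
    [0.95, 0.03, 0.02],
    [0.55, 0.30, 0.15],
    [0.40, 0.35, 0.25],
])
print(infer_members(probs))  # -> [ True  True False False]
```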

