Chinese Journal of Network and Information Security ›› 2023, Vol. 9 ›› Issue (4): 29-39.doi: 10.11959/j.issn.2096-109x.2023051

• Papers •

Privacy leakage risk assessment for reversible neural network

Yifan HE1,2, Jie ZHANG2,3, Weiming ZHANG2,3, Nenghai YU2,3   

  1 School of Information Science and Technology, University of Science and Technology of China, Hefei 230027, China
  2 Key Laboratory of Electromagnetic Space Information, Chinese Academy of Sciences, Hefei 230027, China
  3 School of Cyber Science and Technology, University of Science and Technology of China, Hefei 230027, China
  • Revised: 2023-02-06  Online: 2023-08-01  Published: 2023-08-01
  • Supported by:
    The National Natural Science Foundation of China (U20B2047, 62072421, 62002334, 62102386, 62121002); Exploration Fund of University of Science and Technology of China (YD3480002001); Fundamental Research Funds for the Central Universities (WK2100000011)

Abstract:

In recent years, deep learning has emerged as a crucial technology in many fields. However, training deep learning models often requires a substantial amount of data, which may contain private and sensitive information such as personal identities and financial or medical records. Consequently, the privacy risks of artificial intelligence models have attracted significant attention in academia. Existing privacy research on deep learning, however, has focused mainly on traditional neural networks, with limited exploration of emerging architectures such as reversible networks. Reversible neural networks have a distinct structure in which the input of an upper layer can be recovered exactly from the output of the lower layer. Intuitively, this structure retains more information about the training data and may therefore carry a higher risk of privacy leakage than traditional networks. The privacy of reversible networks was therefore examined from two aspects: data privacy leakage and model function privacy leakage. A risk assessment strategy was applied to two classical reversible networks, RevNet and i-RevNet, using four attack methods: membership inference attack, model inversion attack, attribute inference attack, and model extraction attack. The experimental results demonstrate that reversible networks exhibit more serious privacy risks than traditional neural networks under membership inference, model inversion, and attribute inference attacks, while showing risks similar to those of traditional networks under model extraction attack. Given the increasing adoption of reversible neural networks in various tasks, including those involving sensitive data, addressing these privacy risks is imperative. Based on the analysis of the experimental results, potential solutions were proposed that can be applied to the future development of reversible networks.
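The exact recoverability that the abstract describes can be illustrated with an additive coupling block, the basic building unit of reversible networks such as RevNet and i-RevNet. This is a minimal sketch, not the paper's code: `F` and `G` stand in for arbitrary residual functions (here toy affine maps are assumed); invertibility holds regardless of their form, which is why no intermediate activations need to be stored and why the input leaks fully from the output.

```python
# Additive coupling block: the input (x1, x2) is split into two halves,
# transformed into (y1, y2), and can be recovered exactly from (y1, y2).

def F(x):
    # Placeholder residual function (any function works; invertibility
    # of the block does not depend on F being invertible).
    return 2.0 * x + 1.0

def G(x):
    # Second placeholder residual function.
    return -0.5 * x

def forward(x1, x2):
    # Forward pass of one reversible block.
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    # Exact reconstruction of the input from the output:
    # run the same residual functions and subtract.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = 1.5, -2.0
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert (r1, r2) == (x1, x2)  # input recovered exactly from the output
```

Because the inverse is exact rather than approximate, an adversary holding a layer's output effectively holds its input as well, which is the structural intuition behind the heightened leakage the paper measures.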

Key words: reversible neural network, privacy protection, membership inference attack, privacy threat

