Chinese Journal of Network and Information Security ›› 2021, Vol. 7 ›› Issue (1): 113-120. doi: 10.11959/j.issn.2096-109x.2021012

• Academic Papers •

Moving target defense against adversarial attacks

Bin WANG1,2,3, Liang CHEN1, Yaguan QIAN1, Yankai GUO1, Qiqi SHAO1, Jiamin WANG1

  1. School of Big Data Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
    2. College of Electrical Engineering, Zhejiang University, Hangzhou 310058, China
    3. Network and Information Security Laboratory, Hangzhou Hikvision Digital Technology Co., Ltd., Hangzhou 310058, China
  • Revised: 2020-12-08  Online: 2021-02-15  Published: 2021-02-01
  • About the authors: Bin WANG (1978- ), born in Sishui, Shandong, is a researcher at Hangzhou Hikvision Digital Technology Co., Ltd. His main research interests include artificial intelligence security, IoT security and cryptography.
    Liang CHEN (1995- ), born in Wuxi, Jiangsu, is a master's student at Zhejiang University of Science and Technology. His main research interests include adversarial deep learning and neural network compression.
    Yaguan QIAN (1976- ), born in Shengzhou, Zhejiang, holds a Ph.D. and is an associate professor at Zhejiang University of Science and Technology. His main research interests include artificial intelligence security, machine learning and big data processing, and adversarial machine learning.
    Yankai GUO (1994- ), born in Zhumadian, Henan, is a master's student at Zhejiang University of Science and Technology. His main research interests include deep learning for image processing and adversarial deep learning.
    Qiqi SHAO (1997- ), born in Yongjia, Zhejiang, is a master's student at Zhejiang University of Science and Technology. Her main research interest is deep learning security.
    Jiamin WANG (1993- ), born in Xinyi, Zhejiang, is a master's student at Zhejiang University of Science and Technology. Her main research interest is deep learning security.
  • Supported by:
    Hangzhou Leading Innovation Team Project (2019); The National Key R&D Program of China (2018YFB2100400); Science and Technology Project of the Headquarters of State Grid Corporation of China (5700-202019187A-0-0-00)

Moving target defense against adversarial attacks

Bin WANG1,2,3, Liang CHEN1, Yaguan QIAN1, Yankai GUO1, Qiqi SHAO1, Jiamin WANG1   

  1. School of Big Data Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
    2. College of Electrical Engineering, Zhejiang University, Hangzhou 310058, China
    3. Network and Information Security Laboratory, Hangzhou Hikvision Digital Technology Co., Ltd., Hangzhou 310058, China
  • Revised: 2020-12-08  Online: 2021-02-15  Published: 2021-02-01
  • Supported by:
    Hangzhou Leading Innovation Team Project (2019); The National Key R&D Program of China (2018YFB2100400); Science and Technology Project of the Headquarters of State Grid Corporation of China (5700-202019187A-0-0-00)

Abstract:

Deep neural networks have been successfully applied to image classification, but research shows that they are vulnerable to adversarial example attacks. A moving target defense method is proposed that dynamically switches among member models according to a Bayes-Stackelberg game strategy, so that an attacker cannot continuously obtain consistent information and is thus blocked from constructing adversarial examples. The diversity of the member models is the key to improving the effect of moving target defense; taking the gradient consistency among member models as a measure, a new loss function is constructed for training, which effectively increases the difference among the member models. Experimental results show that the proposed method improves the moving target defense performance of an image classification system and significantly reduces the attack success rate of adversarial examples.
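The switching mechanism described above can be illustrated with a short sketch: at serving time the defender answers each query with one member model sampled from a mixed strategy, so an attacker never probes a fixed, consistent target. This is a minimal illustration under stated assumptions, not the paper's implementation; the member models and the switching probabilities probs (which the paper would obtain from the Bayes-Stackelberg game) are assumed to be given.

    import random

    class MovingTargetClassifier:
        """Serve each query with one member model drawn from a mixed strategy,
        so repeated probing never observes a single consistent model."""

        def __init__(self, models, probs):
            # models: list of trained member classifiers (callables: x -> logits)
            # probs: defender's switching probabilities, assumed to be computed
            #        offline (e.g. as a Bayes-Stackelberg equilibrium strategy)
            assert len(models) == len(probs)
            self.models = models
            self.probs = probs

        def predict(self, x):
            # Sample a member model independently for every incoming query.
            (model,) = random.choices(self.models, weights=self.probs, k=1)
            return model(x)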

Key words: adversarial examples, moving target defense, Bayes-Stackelberg game

Abstract:

Deep neural networks have been successfully applied to image classification, but recent research shows that they are vulnerable to adversarial example attacks. A moving target defense method was proposed that dynamically switches among member models according to a Bayes-Stackelberg game strategy, which prevents an attacker from continuously obtaining consistent information and thus blocks the construction of adversarial examples. To improve the defense effect, the gradient consistency among the member models was taken as a measure and used to construct a new loss function for training, which effectively increases the difference among the member models. Experimental results show that the proposed method improves the moving target defense performance of the image classification system and significantly reduces the success rate of adversarial example attacks.
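A minimal training sketch of such a loss is given below (a PyTorch setup is assumed). Gradient consistency is measured here as the cosine similarity between the member models' input gradients and added as a penalty to the ordinary classification loss; the names (input_gradient, joint_training_step, lambda_div) and the exact form of the penalty are illustrative assumptions, not the paper's definition.

    import torch
    import torch.nn.functional as F

    def input_gradient(model, x, y):
        """Gradient of the cross-entropy loss w.r.t. the input, kept differentiable."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        (grad,) = torch.autograd.grad(loss, x, create_graph=True)
        return grad.flatten(start_dim=1)

    def joint_training_step(models, optimizer, x, y, lambda_div=0.5):
        """One optimization step: the members' classification losses plus a penalty
        on the pairwise cosine similarity (gradient consistency) of their input
        gradients, which pushes the member models apart."""
        cls_loss = sum(F.cross_entropy(m(x), y) for m in models)
        grads = [input_gradient(m, x, y) for m in models]
        consistency, pairs = 0.0, 0
        for i in range(len(grads)):
            for j in range(i + 1, len(grads)):
                consistency = consistency + F.cosine_similarity(grads[i], grads[j], dim=1).mean()
                pairs += 1
        loss = cls_loss + lambda_div * consistency / max(pairs, 1)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()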

Key words: adversarial examples, moving target defense, Bayes-Stackelberg game

