Chinese Journal of Network and Information Security ›› 2022, Vol. 8 ›› Issue (6): 102-109. doi: 10.11959/j.issn.2096-109x.2022074

• Research Papers •

Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation

Bin WANG1,2, Simin LI1, Yaguan QIAN1, Jun ZHANG3, Chaohao LI2, Chenming ZHU3, Hongfei ZHANG3

  1 Zhejiang University of Science and Technology, Hangzhou 310023, China
    2 Zhejiang Key Laboratory of Multi-dimensional Perception Technology, Application and Cybersecurity, Hangzhou 310052, China
    3 Zhejiang Electronic Information Products Inspection and Research Institute, Hangzhou 310007, China
  • Revised: 2022-06-11 Online: 2022-12-15 Published: 2023-01-16
  • About the authors: Bin WANG (1978- ), male, born in Sishui, Shandong, Ph.D., is a researcher and doctoral supervisor at the Zhejiang Key Laboratory of Multi-dimensional Perception Technology, Application and Cybersecurity. His research interests include IoT security, AI security, and cryptography.
    Simin LI (1997- ), male, born in Yiwu, Zhejiang, is a master's student at Zhejiang University of Science and Technology. His research interests include deep learning, AI security, and model compression.
    Yaguan QIAN (1976- ), male, born in Shengzhou, Zhejiang, Ph.D., is a professor at the School of Science, Zhejiang University of Science and Technology. His research interests include deep learning and AI security.
    Jun ZHANG (1980- ), female, born in Neixiang, Henan, is a senior engineer at the Zhejiang Informatization Development Center. Her research interest is network and information security.
    Chaohao LI (1995- ), male, born in Wenzhou, Zhejiang, Ph.D., is deputy director of the Zhejiang Key Laboratory of Multi-dimensional Perception Technology, Application and Cybersecurity. His research interests include IoT security, AI security, adversarial security of perception systems, and data privacy protection.
    Chenming ZHU (1981- ), male, born in Hangzhou, Zhejiang, is a senior engineer at the Zhejiang Electronic Information Products Inspection and Research Institute. His research interest is network and information security.
    Hongfei ZHANG (1984- ), male, born in Jiaxing, Zhejiang, is an engineer at the Zhejiang Electronic Information Products Inspection and Research Institute. His research interest is network and information security.
  • Supported by:
    The National Natural Science Foundation of China (92167203), The Natural Science Foundation of Zhejiang Province (LZ22F020007)

Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation

Bin WANG1,2, Simin LI1, Yaguan QIAN1, Jun ZHANG3, Chaohao LI2, Chenming ZHU3, Hongfei ZHANG3   

  1 Zhejiang University of Science and Technology, Hangzhou 310023, China
    2 Zhejiang Key Laboratory of Multi-dimensional Perception Technology, Application and Cybersecurity, Hangzhou 310052, China
    3 Zhejiang Electronic Information Products Inspection and Research Institute, Hangzhou 310007, China
  • Revised: 2022-06-11 Online: 2022-12-15 Published: 2023-01-16
  • Supported by:
    The National Natural Science Foundation of China (92167203), The Natural Science Foundation of Zhejiang Province (LZ22F020007)

Abstract:

Adversarial training is a commonly used class of defense methods against adversarial attacks: by incorporating adversarial examples into the training process, it can effectively resist such attacks. However, the robustness of adversarially trained models typically depends on increased network capacity; that is, the networks obtained by adversarial training substantially enlarge their model capacity in order to defend against adversarial attacks, which severely constrains their usability. How to reduce model capacity while preserving the robustness of the adversarially trained model, i.e., to devise a lightweight defense against adversarial attacks, is therefore a major challenge. To address this problem, a lightweight defense method against adversarial attacks was proposed that fuses pruning with robust distillation. Taking adversarial robust accuracy as the optimization criterion, the method first applies hierarchically adaptive pruning to compress a pre-trained robust adversarial model, and then performs robust distillation based on data filtering on the pruned network, achieving effective compression of the robust adversarially trained model and reducing its capacity. The proposed method was evaluated and compared on the CIFAR-10 and CIFAR-100 datasets. The experimental results show that, under the same TRADES adversarial training, the proposed hierarchically adaptive pruning technique yields network structures with stronger robustness than existing pruning techniques across a range of FLOPs budgets. Moreover, the lightweight defense that fuses pruning and robust distillation achieves higher adversarial robust accuracy than other robust distillation methods. The results thus demonstrate that the proposed method is more robust than existing methods while reducing the capacity of the adversarially trained model, improving the applicability of adversarial training in IoT edge computing environments.
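The hierarchically adaptive pruning stage described above can be illustrated with a minimal sketch. The paper's actual criterion (selecting per-layer pruning rates to optimize adversarial robust accuracy) is not reproduced here; this sketch only shows the general shape of layer-wise magnitude pruning with adaptive keep ratios, where the `sensitivities` input is a hypothetical stand-in for whatever per-layer robustness signal such a method would derive.

```python
import numpy as np

def prune_layer(weights, keep_ratio):
    """Zero out the smallest-magnitude weights, keeping roughly
    `keep_ratio` of the entries (ties at the threshold survive)."""
    flat = np.abs(weights).ravel()
    k = max(1, int(round(len(flat) * keep_ratio)))
    threshold = np.sort(flat)[-k]              # k-th largest magnitude
    return weights * (np.abs(weights) >= threshold)

def layerwise_adaptive_prune(layers, base_keep, sensitivities):
    """Prune each layer with a keep ratio scaled by its (assumed)
    sensitivity, so layers the robustness objective depends on more
    heavily are pruned less aggressively."""
    s = np.asarray(sensitivities, dtype=float)
    ratios = np.clip(base_keep * s / s.mean(), 0.05, 1.0)
    return [prune_layer(w, r) for w, r in zip(layers, ratios)]

rng = np.random.default_rng(0)
layers = [rng.normal(size=(8, 8)) for _ in range(3)]
pruned = layerwise_adaptive_prune(layers, base_keep=0.5,
                                  sensitivities=[1.0, 2.0, 0.5])
for w in pruned:
    print(f"{np.mean(w != 0):.2f}")  # surviving fraction per layer
```

In a real pipeline the keep ratios would be searched under a FLOPs budget rather than derived from a fixed sensitivity vector, and pruning would be followed by the distillation stage to recover robust accuracy.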

Key words: adversarial defense, pruning, robust distillation, lightweight network

Abstract:

Adversarial training is one of the most commonly used defenses against adversarial attacks, working by incorporating adversarial samples into the training process. However, the effectiveness of adversarial training relies heavily on the size of the trained model. Specifically, the size of models produced by adversarial training increases significantly in order to defend against adversarial attacks. This constrains the usability of adversarial training, especially in resource-constrained environments. Thus, how to reduce the model size while preserving the robustness of the trained model is a challenge. To address these issues, a lightweight defense mechanism against adversarial attacks was proposed, combining adaptive pruning with robust distillation. A hierarchically adaptive pruning method was first applied to a model obtained by adversarial training, and the pruned model was then further compressed by a modified robust distillation method. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the hierarchically adaptive pruning method achieves stronger robustness under various FLOPs budgets than existing pruning methods. Moreover, the fusion of pruning and robust distillation achieves higher robustness than state-of-the-art robust distillation methods. These results demonstrate that the proposed method can improve the usability of adversarial training in IoT edge computing environments.
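As a rough illustration of the robust distillation step, the sketch below combines a temperature-softened KL term (pulling the pruned student's predictions on adversarial inputs toward the robust teacher's predictions) with a standard cross-entropy term on the hard labels. The function names, the `alpha` and `temperature` values, and the choice of teacher inputs are illustrative assumptions, not the paper's exact formulation, which additionally filters the training data.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max(axis=-1, keepdims=True)         # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def robust_distill_loss(student_logits_adv, teacher_logits, labels,
                        temperature=4.0, alpha=0.7):
    """alpha-weighted sum of (i) KL(teacher || student) on softened
    predictions and (ii) cross-entropy with the hard labels. The T^2
    factor keeps the KL gradient scale comparable across temperatures."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits_adv, temperature)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    probs = softmax(student_logits_adv)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * temperature**2 * kl + (1 - alpha) * ce))

teacher = np.array([[4.0, 1.0, 0.0], [0.5, 3.0, 0.5]])
labels = np.array([0, 1])
aligned = robust_distill_loss(teacher, teacher, labels)  # KL term vanishes
shifted = robust_distill_loss(teacher[:, ::-1].copy(), teacher, labels)
print(aligned < shifted)
```

A training loop would generate `student_logits_adv` from adversarial examples crafted against the student (e.g., by PGD) and minimize this loss with respect to the pruned student's weights.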

Key words: adversarial defenses, pruning, robust distillation, lightweight network

