Chinese Journal of Network and Information Security ›› 2022, Vol. 8 ›› Issue (6): 102-109. DOI: 10.11959/j.issn.2096-109x.2022074


Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation

Bin WANG1,2, Simin LI1, Yaguan QIAN1, Jun ZHANG3, Chaohao LI2, Chenming ZHU3, Hongfei ZHANG3   

  1 Zhejiang University of Science and Technology, Hangzhou 310023, China
    2 Zhejiang Key Laboratory of Multi-dimensional Perception Technology, Application and Cybersecurity, Hangzhou 310052, China
    3 Zhejiang Electronic Information Products Inspection and Research Institute, Hangzhou 310007, China
  • Revised: 2022-06-11  Online: 2022-12-15  Published: 2023-01-16
  • Supported by:
    The National Natural Science Foundation of China(92167203)

Abstract:

Adversarial training, which incorporates adversarial samples into the training process, is one of the most commonly used defenses against adversarial attacks. However, its effectiveness depends heavily on the size of the trained model: to defend against adversarial attacks, adversarial training significantly increases model size. This constrains the usability of adversarial training, especially in resource-constrained environments. How to reduce model size while preserving the robustness of the trained model is therefore a challenge. To address this issue, a lightweight defense mechanism against adversarial attacks was proposed, combining adaptive pruning and robust distillation. A hierarchically adaptive pruning method was first applied to the model produced by adversarial training, and the pruned model was then further compressed by a modified robust distillation method. Experimental results on the CIFAR-10 and CIFAR-100 datasets show that the hierarchically adaptive pruning method achieves stronger robustness than existing pruning methods under various FLOPs budgets, and that the fusion of pruning and robust distillation outperforms state-of-the-art robust distillation methods. These results demonstrate that the proposed method can improve the usability of adversarial training in IoT edge computing environments.
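The sketch below illustrates the two-stage pipeline the abstract describes: layer-wise pruning of an adversarially trained model, followed by robust distillation into a compact student. The paper's exact hierarchically adaptive pruning criterion and its modified distillation loss are not given in the abstract, so per-layer L1 magnitude pruning and a standard PGD-based distillation loss are used here as hypothetical stand-ins; all function names and hyperparameters are illustrative assumptions.

```python
# Minimal PyTorch sketch of the pipeline in the abstract (illustrative only):
# (1) layer-wise pruning of an adversarially trained teacher,
# (2) robust distillation from the teacher to a smaller student.
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def layerwise_magnitude_prune(model, ratios):
    """Stand-in for hierarchically adaptive pruning: prune each conv
    layer by its own ratio (the per-layer ratios are assumed given)."""
    convs = [m for m in model.modules() if isinstance(m, torch.nn.Conv2d)]
    for layer, ratio in zip(convs, ratios):
        prune.l1_unstructured(layer, name="weight", amount=ratio)

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard PGD to craft adversarial samples during distillation."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
        x_adv = x_adv.clamp(0, 1)                          # valid pixel range
    return x_adv.detach()

def robust_distillation_loss(student, teacher, x, y, T=4.0, alpha=0.9):
    """Generic robust distillation: temperature-scaled KL between the
    student's adversarial outputs and the teacher's clean outputs,
    mixed with a hard-label cross-entropy term."""
    x_adv = pgd_attack(student, x, y)       # adversaries against the student
    with torch.no_grad():
        t_logits = teacher(x)               # pruned, adversarially trained teacher
    s_logits = student(x_adv)
    soft = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(s_logits, y)
    return alpha * soft + (1 - alpha) * hard
```

In this reading of the pipeline, the teacher is the adversarially trained model after pruning, and the student is the final lightweight network deployed on the IoT edge device; only the student is trained with the distillation loss above.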

Key words: adversarial defenses, pruning, robust distillation, lightweight network
