Chinese Journal of Network and Information Security ›› 2021, Vol. 7 ›› Issue (1): 113-120.doi: 10.11959/j.issn.2096-109x.2021012


Moving target defense against adversarial attacks

Bin WANG1,2,3, Liang CHEN1, Yaguan QIAN1, Yankai GUO1, Qiqi SHAO1, Jiamin WANG1   

  1. College of Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
    2. College of Electrical Engineering, Zhejiang University, Hangzhou 310058, China
    3. Network and Information Security Laboratory, Hangzhou Hikvision Digital Technology Co., Ltd., Hangzhou 310058, China
  • Revised:2020-12-08 Online:2021-02-15 Published:2021-02-01
  • Supported by:
    Science and Technology Project of State Grid Corporation of China; The National Key R&D Program of China (2018YFB2100400); Hangzhou City Leading Innovation Team Project in 2019 (5700-202019187A-0-0-00)

Abstract:

Deep neural networks have been successfully applied to image classification, but recent research shows that they are vulnerable to adversarial attacks. A moving target defense method was proposed that dynamically switches among member models according to a Bayes-Stackelberg game strategy, which prevents an attacker from continuously obtaining consistent model information and thus blocks the construction of adversarial examples. To improve the defense effect, the gradient consistency among the member models was used as a measure to construct a new loss function for training, increasing the difference among the member models. Experimental results show that the proposed method improves the moving target defense performance of the image classification system and significantly reduces the success rate of adversarial attacks.
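The two ideas in the abstract, measuring gradient consistency among member models and serving queries by randomly switching models under a mixed strategy, can be illustrated with a minimal sketch. The abstract does not give the paper's actual loss function or equilibrium computation, so everything below is a hypothetical stand-in: linear "member models" (whose input gradients are just weight rows), mean pairwise cosine similarity as the consistency measure, and a uniform switching distribution in place of a Bayes-Stackelberg equilibrium strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble of linear member models: logits = W @ x.
n_models, n_classes, dim = 3, 5, 16
members = [rng.normal(size=(n_classes, dim)) for _ in range(n_models)]

def input_gradient(W, x, c):
    # For a linear model, d(logit_c)/dx is simply the c-th weight row.
    return W[c]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def gradient_consistency(models, x, c):
    # Mean pairwise cosine similarity of the members' input gradients.
    # Lower values mean more diverse members, so an adversarial example
    # crafted against one member transfers less well to the others;
    # a training loss could penalize this quantity (assumption, not the
    # paper's exact formulation).
    grads = [input_gradient(W, x, c) for W in models]
    sims = [cosine(grads[i], grads[j])
            for i in range(len(grads)) for j in range(i + 1, len(grads))]
    return sum(sims) / len(sims)

def serve(x, probs=None):
    # Moving-target serving: for each query, sample one member model
    # from a mixed strategy. Here the strategy is uniform; the paper's
    # Bayes-Stackelberg game would supply the switching probabilities.
    probs = probs if probs is not None else [1.0 / n_models] * n_models
    i = rng.choice(n_models, p=probs)
    return int(np.argmax(members[i] @ x))

x = rng.normal(size=dim)
score = gradient_consistency(members, x, c=0)   # in [-1, 1]
pred = serve(x)                                 # class index of a random member
```

Because the attacker cannot tell which member answered a given query, gradient-based attacks estimated from past responses are aimed at a moving target; the diversity penalty makes the remaining transfer between members weaker still.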

Key words: adversarial examples, moving target defense, Bayes-Stackelberg game

