Chinese Journal of Network and Information Security ›› 2020, Vol. 6 ›› Issue (4): 67-76. doi: 10.11959/j.issn.2096-109x.2020052

• Papers •

Improving the robustness of algorithms in adversarial environments via moving target defense

Kang HE 1,2, Yuefei ZHU 1,2, Long LIU 1,2, Bin LU 1,2, Bin LIU 1,2

  1. Cyberspace Security Institute, Information Engineering University, Zhengzhou 450001, China
  2. State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450001, China
  • Revised: 2020-02-04 Online: 2020-08-15 Published: 2020-08-13
  • Supported by:
    The National Key R&D Program of China (2016YFB0801505); Cutting-edge Science and Technology Innovation Project of the Key R&D Program of China (2019QY1305)

Abstract:

Traditional machine learning models operate in a benign environment, under the assumption that training data and test data share the same distribution. However, this assumption does not hold in areas such as malicious document detection. An adversary attacks the classification algorithm by modifying test samples, so that carefully crafted malicious samples can evade detection by machine learning models. To improve the security of machine learning algorithms, a method based on moving target defense (MTD) was proposed to enhance their robustness. Experimental results show that the proposed method can effectively resist evasion attacks against the detection algorithm through dynamic transformation at the stages of the algorithm model, feature selection, and result output.
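To make the three randomized stages concrete, the following is a minimal Python sketch of the general MTD idea described in the abstract, not the authors' implementation. The class name MovingTargetDetector and all parameters (pool size, feature ratio, threshold jitter) are illustrative assumptions; it randomizes which model answers a query, which feature subset each model was trained on, and the output decision threshold.

```python
import random

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC


class MovingTargetDetector:
    """Hypothetical MTD-style detector: randomizes the model, the
    feature subset, and the decision threshold on every query."""

    def __init__(self, n_features, feature_ratio=0.8, seed=None):
        self.rng = random.Random(seed)
        self.n_features = n_features
        self.k = max(1, int(n_features * feature_ratio))
        self.pool = []  # trained (model, feature_indices) variants

    def fit(self, X, y, n_variants=5):
        constructors = [
            lambda: RandomForestClassifier(n_estimators=50),
            lambda: LogisticRegression(max_iter=1000),
            lambda: SVC(probability=True),
        ]
        for _ in range(n_variants):
            # Feature-selection stage: each variant sees a random subset.
            idx = sorted(self.rng.sample(range(self.n_features), self.k))
            # Model stage: each variant uses a randomly chosen learner.
            model = self.rng.choice(constructors)()
            model.fit(X[:, idx], y)
            self.pool.append((model, idx))
        return self

    def predict(self, x):
        # A random variant answers each query, so repeated probing
        # never faces the same decision boundary.
        model, idx = self.rng.choice(self.pool)
        p = model.predict_proba(x[idx].reshape(1, -1))[0, 1]
        # Result-output stage: jitter the threshold so even the
        # score-to-label mapping changes between queries.
        return int(p >= 0.5 + self.rng.uniform(-0.05, 0.05))


# Toy usage on synthetic binary data.
X = np.random.rand(200, 20)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
detector = MovingTargetDetector(n_features=20, seed=42).fit(X, y)
print(detector.predict(X[0]))
```

In this sketch, an attacker who probes the detector to craft an evasive sample optimizes against a target that shifts between queries, which is the core intuition behind applying moving target defense to classification.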

Key words: machine learning, algorithm robustness, moving target defense, dynamic transformation

