Chinese Journal of Network and Information Security ›› 2020, Vol. 6 ›› Issue (1): 38-45.doi: 10.11959/j.issn.2096-109x.2020012


Adversarial example detection method based on boundary value invariants

Fei YAN, Minglun ZHANG, Liqiang ZHANG

  1. Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan 430072, China
  • Revised: 2020-02-02  Online: 2020-02-15  Published: 2020-03-23
  • Supported by:
    The National Basic Research Program of China (973 Program) (2014CB340601); The National Natural Science Foundation of China (61272452)

Abstract:

Nowadays, deep learning has become one of the most widely studied and applied technologies in the computer field. Deep neural networks (DNNs) have achieved remarkable success in many applications such as image recognition, speech recognition, self-driving and text translation. However, deep neural networks are vulnerable to adversarial examples, which are generated by perturbing correctly classified inputs so as to cause DNN models to misbehave. Inspired by boundary checking in traditional programs, a method was proposed that fits the distribution of values inside a deep neural network to find invariants, and then uses these invariants to detect adversarial examples. The selection of the training set is independent of the adversarial examples. The experimental results show that the proposed method can effectively detect current adversarial example attacks on the LeNet and VGG19 models over the MNIST and CIFAR-10 datasets, with a low false positive rate.
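The abstract does not give implementation details, so the following is only a minimal sketch of one plausible reading of the approach: the "boundary value invariants" are taken to be per-neuron [min, max] activation ranges fitted on clean training data, and an input is flagged as adversarial when its activations fall outside those ranges. The class name RangeInvariantDetector, the choice of hooked layer, and the tol margin are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of boundary-value-invariant detection, assuming the
# invariants are elementwise [min, max] activation bounds fitted on clean data.
import torch
import torch.nn as nn

class RangeInvariantDetector:
    """Records [min, max] activation bounds per neuron of a chosen layer
    on clean inputs, then flags inputs whose activations violate them."""

    def __init__(self, model: nn.Module, layer: nn.Module):
        self.model = model
        self.lo = None      # fitted lower bounds, one per neuron
        self.hi = None      # fitted upper bounds, one per neuron
        self._acts = None   # activations captured by the forward hook
        layer.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        # Flatten to (batch, features) so the bounds are per neuron.
        self._acts = output.detach().flatten(1)

    @torch.no_grad()
    def fit(self, clean_loader):
        # Fit the invariants: running elementwise min/max over clean inputs.
        for x, _ in clean_loader:
            self.model(x)
            batch_lo = self._acts.min(dim=0).values
            batch_hi = self._acts.max(dim=0).values
            self.lo = batch_lo if self.lo is None else torch.minimum(self.lo, batch_lo)
            self.hi = batch_hi if self.hi is None else torch.maximum(self.hi, batch_hi)

    @torch.no_grad()
    def is_adversarial(self, x, tol: float = 0.0):
        # Boundary check: any activation outside the fitted range
        # (plus an optional tolerance) marks the input as suspicious.
        self.model(x)
        below = self._acts < (self.lo - tol)
        above = self._acts > (self.hi + tol)
        return (below | above).any(dim=1)
```

Under this reading, fitting never looks at adversarial examples, which matches the abstract's claim that training-set selection is independent of the attacks; the tol margin trades detection rate against false positives on clean inputs.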

Key words: deep neural network, boundary checking, invariant, adversarial example detection

