Chinese Journal of Network and Information Security, 2020, Vol. 6, Issue (1): 38-45. doi: 10.11959/j.issn.2096-109x.2020012
Fei YAN, Minglun ZHANG, Liqiang ZHANG
Revised: 2020-02-02
Online: 2020-02-15
Published: 2020-03-23
Fei YAN,Minglun ZHANG,Liqiang ZHANG. Adversarial examples detection method based on boundary values invariants[J]. Chinese Journal of Network and Information Security, 2020, 6(1): 38-45.
| Detector | Dataset | False positive rate | FGSM | DeepFool | JSMA Next | JSMA LL | CW2 Next | CW2 LL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BVI (proposed) | MNIST | 0.0% | 98.6% | 98.8% | 85.4% | 85.2% | 98.6% | 98.6% |
| MagNet | MNIST | 5.1% | 100.0% | 91.0% | 84.0% | 84.0% | 85.0% | 87.0% |
| LID | MNIST | 4.4% | 97.0% | 92.0% | 96.0% | 96.0% | 92.0% | 91.0% |
| FS | MNIST | 4.0% | 98.6% | — | 100.0% | 100.0% | 100.0% | 100.0% |
| BVI (proposed) | CIFAR-10 | 0.6% | 97.6% | 97.8% | 82.2% | 83.2% | 97.2% | 97.2% |
| MagNet | CIFAR-10 | 6.4% | 100.0% | 87.0% | 72.0% | 74.0% | 89.0% | 91.0% |
| LID | CIFAR-10 | 5.6% | 94.0% | 84.0% | 92.0% | 92.0% | 86.0% | 88.0% |
| FS | CIFAR-10 | 4.9% | 21.0% | 77.0% | 84.0% | 89.0% | 100.0% | 100.0% |
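The two per-detector metrics in the table above are the standard ones for adversarial-example detectors: detection rate (fraction of adversarial inputs flagged) and false positive rate (fraction of benign inputs wrongly flagged). The sketch below is illustrative only, not the authors' code; the function names and the toy counts are assumptions chosen to mirror the BVI/MNIST/FGSM row.

```python
def detection_rate(flags_on_adversarial):
    """Fraction of adversarial inputs the detector flags as adversarial."""
    return sum(flags_on_adversarial) / len(flags_on_adversarial)

def false_positive_rate(flags_on_benign):
    """Fraction of benign inputs the detector wrongly flags as adversarial."""
    return sum(flags_on_benign) / len(flags_on_benign)

# Toy example: a detector that flags 493 of 500 adversarial examples
# and none of 500 benign images would score 98.6% / 0.0%, matching
# the shape of the BVI row for FGSM on MNIST.
adv_flags = [True] * 493 + [False] * 7
benign_flags = [False] * 500
print(f"{detection_rate(adv_flags):.1%}")        # 98.6%
print(f"{false_positive_rate(benign_flags):.1%}")  # 0.0%
```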
[1] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]// Advances in Neural Information Processing Systems. 2012: 1097-1105.
[2] BOJARSKI M, Del TESTA D, DWORAKOWSKI D, et al. End to end learning for self-driving cars[J]. arXiv preprint arXiv:1604.07316, 2016.
[3] DAHL G E, STOKES J W, DENG L, et al. Large-scale malware classification using random projections and neural networks[C]// 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. 2013: 3422-3426.
[4] MIRSKY Y, DOITSHMAN T, ELOVICI Y, et al. Kitsune: an ensemble of autoencoders for online network intrusion detection[J]. arXiv preprint arXiv:1802.09089, 2018.
[5] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013.
[6] DHILLON G S, AZIZZADENESHELI K, LIPTON Z C, et al. Stochastic activation pruning for robust adversarial defense[J]. arXiv preprint arXiv:1803.01442, 2018.
[7] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv preprint arXiv:1412.6572, 2014.
[8] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world[J]. arXiv preprint arXiv:1607.02533, 2016.
[9] CARLINI N, WAGNER D. Defensive distillation is not robust to adversarial examples[J]. arXiv preprint arXiv:1607.04311, 2016.
[10] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]// 2016 IEEE European Symposium on Security and Privacy (EuroS&P). 2016: 372-387.
[11] LIU Y, MA S, AAFER Y, et al. Trojaning attack on neural networks[C]// Network and Distributed System Security Symposium. 2018.
[12] GU S, RIGAZIO L. Towards deep neural network architectures robust to adversarial examples[J]. arXiv preprint arXiv:1412.5068, 2014.
[13] FEINMAN R, CURTIN R R, SHINTRE S, et al. Detecting adversarial samples from artifacts[J]. arXiv preprint arXiv:1703.00410, 2017.
[14] GROSSE K, MANOHARAN P, PAPERNOT N, et al. On the (statistical) detection of adversarial examples[J]. arXiv preprint arXiv:1702.06280, 2016.
[15] MA X, LI B, WANG Y, et al. Characterizing adversarial subspaces using local intrinsic dimensionality[J]. arXiv preprint arXiv:1801.02613, 2018.
[16] XU W, EVANS D, QI Y. Feature squeezing: detecting adversarial examples in deep neural networks[J]. arXiv preprint arXiv:1704.01155, 2017.
[17] MENG D, CHEN H. MagNet: a two-pronged defense against adversarial examples[C]// The 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017: 135-147.
[18] LIAO F, LIANG M, DONG Y, et al. Defense against adversarial attacks using high-level representation guided denoiser[C]// The IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1778-1787.
[19] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.
[20] BROWN T B, MANÉ D, ROY A, et al. Adversarial patch[J]. arXiv preprint arXiv:1712.09665, 2017.
[21] EYKHOLT K, EVTIMOV I, FERNANDES E, et al. Robust physical-world attacks on deep learning models[J]. arXiv preprint arXiv:1707.08945, 2017.
[22] PEI K, CAO Y, YANG J, et al. DeepXplore: automated whitebox testing of deep learning systems[C]// The 26th Symposium on Operating Systems Principles. 2017: 1-18.
[23] BIGGIO B, ROLI F. Wild patterns: ten years after the rise of adversarial machine learning[J]. Pattern Recognition, 2018, 84: 317-331.
[24] MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]// IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2574-2582.
[25] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]// 2017 IEEE Symposium on Security and Privacy (SP). 2017: 39-57.
[26] KINGMA D P, BA J. Adam: a method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
[27] ROUHANI B D, SAMRAGH M, JAVAHERIPI M, et al. DeepFense: online accelerated defense against adversarial deep learning[C]// IEEE/ACM International Conference on Computer-Aided Design (ICCAD). 2018: 1-8.
[28] SONG Y, KIM T, NOWOZIN S, et al. PixelDefend: leveraging generative models to understand and defend against adversarial examples[J]. arXiv preprint arXiv:1710.10766, 2017.
[29] XIE C, WANG J, ZHANG Z, et al. Mitigating adversarial effects through randomization[J]. arXiv preprint arXiv:1711.01991, 2017.
[30] PAPERNOT N, MCDANIEL P, SINHA A, et al. Towards the science of security and privacy in machine learning[J]. arXiv preprint arXiv:1611.03814, 2016.
[31] PAPERNOT N, MCDANIEL P, WU X, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]// 2016 IEEE Symposium on Security and Privacy (SP). 2016: 582-597.
[32] PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]// ACM on Asia Conference on Computer and Communications Security. 2017: 506-519.
[33] ATHALYE A, CARLINI N, WAGNER D. Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples[J]. arXiv preprint arXiv:1802.00420, 2018.
[34] BHAGOJI A N, CULLINA D, MITTAL P. Dimensionality reduction as a defense against evasion attacks on machine learning classifiers[J]. arXiv preprint arXiv:1704.02654, 2017.
[35] GONG Z, WANG W, KU W S. Adversarial and clean data are not twins[J]. arXiv preprint arXiv:1704.04960, 2017.
[36] HENDRYCKS D, GIMPEL K. Early methods for detecting adversarial images[J]. arXiv preprint arXiv:1608.00530, 2016.
[37] TAX D M J, DUIN R P W. Support vector domain description[J]. Pattern Recognition Letters, 1999, 20(11-13): 1191-1199.
[38] CARLINI N, WAGNER D. Adversarial examples are not easily detected: bypassing ten detection methods[C]// The 10th ACM Workshop on Artificial Intelligence and Security. 2017: 3-14.
[39] LU P H, CHEN P Y, YU C M. On the limitation of local intrinsic dimensionality for characterizing the subspaces of adversarial examples[J]. arXiv preprint arXiv:1803.09638, 2018.
[40] GUO C, RANA M, CISSE M, et al. Countering adversarial images using input transformations[J]. arXiv preprint arXiv:1711.00117, 2017.
[41] TAO G, MA S, LIU Y, et al. Attacks meet interpretability: attribute-steered detection of adversarial samples[C]// Advances in Neural Information Processing Systems. 2018: 7717-7728.
[42] GILMER J, METZ L, FAGHRI F, et al. Adversarial spheres[J]. arXiv preprint arXiv:1801.02774, 2018.
[43] PERERA P, PATEL V M. Learning deep features for one-class classification[J]. IEEE Transactions on Image Processing, 2019, 28(11): 5450-5463.
[44] TAX D M J, DUIN R P W. Data domain description using support vectors[C]// ESANN. 1999, 99: 251-256.
[45] KRIZHEVSKY A, HINTON G. Learning multiple layers of features from tiny images[R]. Technical Report, University of Toronto, 2009.
[46] RAUBER J, BRENDEL W, BETHGE M. Foolbox: a Python toolbox to benchmark the robustness of machine learning models[J]. arXiv preprint arXiv:1707.04131, 2017.
[47] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.