Chinese Journal of Network and Information Security ›› 2020, Vol. 6 ›› Issue (2): 1-11.doi: 10.11959/j.issn.2096-109x.2020016
• Comprehensive Reviews •
Guanghan DUAN1, Chunguang MA2, Lei SONG1, Peng WU2
Revised: 2019-08-20
Online: 2020-04-15
Published: 2020-04-23
Guanghan DUAN, Chunguang MA, Lei SONG, Peng WU. Research on structure and defense of adversarial example in deep learning[J]. Chinese Journal of Network and Information Security, 2020, 6(2): 1-11.
[1] SZEGEDY C, VANHOUCKE V, IOFFE S, et al. Rethinking the inception architecture for computer vision[C]// The IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2818-2826.
[2] TANG T A, MHAMDI L, MCLERNON D, et al. Deep learning approach for network intrusion detection in software defined networking[C]// 2016 International Conference on Wireless Networks and Mobile Communications (WINCOM). 2016: 258-263.
[3] COLLOBERT R, WESTON J. A unified architecture for natural language processing: deep neural networks with multitask learning[C]// The 25th International Conference on Machine Learning. 2008: 160-167.
[4] CHEN C, SEFF A, KORNHAUSER A, et al. DeepDriving: learning affordance for direct perception in autonomous driving[C]// The IEEE International Conference on Computer Vision. 2015: 2722-2730.
[5] CHING T, HIMMELSTEIN D S, BEAULIEU-JONES B K, et al. Opportunities and obstacles for deep learning in biology and medicine[J]. Journal of The Royal Society Interface, 2018, 15(141).
[6] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013.
[7] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world[J]. arXiv preprint arXiv:1607.02533, 2016.
[8] ALZANTOT M, SHARMA Y, ELGOHARY A, et al. Generating natural language adversarial examples[J]. arXiv preprint arXiv:1804.07998, 2018.
[9] QIN Y, CARLINI N, GOODFELLOW I, et al. Imperceptible, robust, and targeted adversarial examples for automatic speech recognition[J]. arXiv preprint arXiv:1903.10346, 2019.
[10] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436-444.
[11] PAPERNOT N, MCDANIEL P, GOODFELLOW I. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples[J]. arXiv preprint arXiv:1605.07277, 2016.
[12] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]// The 1st IEEE European Symposium on Security and Privacy. 2016.
[13] SONG L, MA C G, DUAN G H. Machine learning security and privacy: a survey[J]. Chinese Journal of Network and Information Security, 2018, 4(8): 1-11.
[14] GU S, RIGAZIO L. Towards deep neural network architectures robust to adversarial examples[J]. arXiv preprint arXiv:1412.5068, 2014.
[15] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[C]// 2015 International Conference on Learning Representations. 2015: 1-10.
[16] TABACOF P, VALLE E. Exploring the space of adversarial images[J]. arXiv preprint arXiv:1510.05328, 2015.
[17] TRAMÈR F, PAPERNOT N, GOODFELLOW I, et al. The space of transferable adversarial examples[J]. arXiv preprint arXiv:1704.03453, 2017.
[18] KROTOV D, HOPFIELD J J. Dense associative memory is robust to adversarial inputs[J]. arXiv preprint arXiv:1701.00939, 2017.
[19] LUO Y, BOIX X, ROIG G, et al. Foveation-based mechanisms alleviate adversarial examples[J]. arXiv preprint arXiv:1511.06292, 2015.
[20] TANAY T, GRIFFIN L. A boundary tilting perspective on the phenomenon of adversarial examples[J]. arXiv preprint arXiv:1608.07690, 2016.
[21] MOOSAVI-DEZFOOLI S M, FAWZI A, FAWZI O, et al. Universal adversarial perturbations[C]// The IEEE Conference on Computer Vision and Pattern Recognition. 2017: 1765-1773.
[22] MOOSAVI-DEZFOOLI S M, FAWZI A, FAWZI O, et al. Analysis of universal adversarial perturbations[J]. arXiv preprint arXiv:1705.09554, 2017.
[23] TRAMÈR F, KURAKIN A, PAPERNOT N, et al. Ensemble adversarial training: attacks and defenses[J]. arXiv preprint arXiv:1705.07204, 2017.
[24] MOOSAVI-DEZFOOLI S M, FAWZI A, FAWZI O, et al. Robustness of classifiers to universal perturbations: a geometric perspective[C]// International Conference on Learning Representations. 2018.
[25] SONG Y, KIM T, NOWOZIN S, et al. PixelDefend: leveraging generative models to understand and defend against adversarial examples[J]. arXiv preprint arXiv:1710.10766, 2017.
[26] MENG D, CHEN H. MagNet: a two-pronged defense against adversarial examples[C]// The 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017: 135-147.
[27] GHOSH P, LOSALKA A, BLACK M J. Resisting adversarial attacks using Gaussian mixture variational autoencoders[J]. arXiv preprint arXiv:1806.00081, 2018.
[28] LEE H, HAN S, LEE J. Generative adversarial trainer: defense to adversarial perturbations with GAN[J]. arXiv preprint arXiv:1705.03387, 2017.
[29] GILMER J, METZ L, FAGHRI F, et al. Adversarial spheres[J]. arXiv preprint arXiv:1801.02774, 2018.
[30] GILMER J, METZ L, FAGHRI F, et al. The relationship between high-dimensional geometry and adversarial examples[J]. arXiv preprint arXiv:1801.02774v3, 2018.
[31] EYKHOLT K, EVTIMOV I, FERNANDES E, et al. Robust physical-world attacks on deep learning visual classification[C]// The IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1625-1634.
[32] MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]// The 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016.
[33] PAPERNOT N, MCDANIEL P, SWAMI A, et al. Crafting adversarial input sequences for recurrent neural networks[C]// MILCOM 2016 - 2016 IEEE Military Communications Conference. 2016: 49-54.
[34] GROSSE K, PAPERNOT N, MANOHARAN P, et al. Adversarial examples for malware detection[C]// European Symposium on Research in Computer Security. 2017: 62-79.
[35] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252.
[36] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]// 2016 IEEE European Symposium on Security and Privacy. 2016: 372-387.
[37] PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]// The 2017 ACM on Asia Conference on Computer and Communications Security. 2017: 506-519.
[38] ILYAS A, ENGSTROM L, ATHALYE A, et al. Black-box adversarial attacks with limited queries and information[J]. arXiv preprint arXiv:1804.08598, 2018.
[39] BALUJA S, FISCHER I. Adversarial transformation networks: learning to generate adversarial examples[J]. arXiv preprint arXiv:1703.09387, 2017.
[40] XIAO C, LI B, ZHU J Y, et al. Generating adversarial examples with adversarial networks[C]// The 27th International Joint Conference on Artificial Intelligence. 2018: 3905-3911.
[41] ZHAO P, FU Z, HU Q, et al. Detecting adversarial examples via key-based network[J]. arXiv preprint arXiv:1806.00580, 2018.
[42] MENG D, CHEN H. MagNet: a two-pronged defense against adversarial examples[C]// The 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017: 135-147.
[43] XU W, EVANS D, QI Y. Feature squeezing: detecting adversarial examples in deep neural networks[J]. arXiv preprint arXiv:1704.01155, 2017.
[44] HOSSEINI H, CHEN Y, KANNAN S, et al. Blocking transferability of adversarial examples in black-box learning systems[J]. arXiv preprint arXiv:1703.04318, 2017.
[45] SABOUR S, FROSST N, HINTON G E. Dynamic routing between capsules[C]// Neural Information Processing Systems. 2017.
[46] FROSST N, SABOUR S, HINTON G. DARCCC: detecting adversaries by reconstruction from class conditional capsules[J]. arXiv preprint arXiv:1811.06969, 2018.
[47] TRAMÈR F, KURAKIN A, PAPERNOT N, et al. Ensemble adversarial training: attacks and defenses[J]. arXiv preprint arXiv:1705.07204, 2017.
[48] SINHA A, CHEN Z, BADRINARAYANAN V, et al. Gradient adversarial training of neural networks[J]. arXiv preprint arXiv:1806.08028, 2018.
[49] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial machine learning at scale[J]. arXiv preprint arXiv:1611.01236, 2016.
[50] PAPERNOT N, MCDANIEL P, WU X, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]// 2016 IEEE Symposium on Security and Privacy. 2016: 582-597.
[51] HINTON G E, VINYALS O, DEAN J. Distilling the knowledge in a neural network[J]. arXiv preprint arXiv:1503.02531, 2015.
[52] LEE H, HAN S, LEE J. Generative adversarial trainer: defense to adversarial perturbations with GAN[J]. arXiv preprint arXiv:1705.03387, 2017.