Chinese Journal of Network and Information Security, 2022, Vol. 8, Issue 3: 111-122. doi: 10.11959/j.issn.2096-109x.2022037
Research on the robustness of convolutional neural networks in image recognition
Dian LIN, Li PAN, Ping YI
Revised: 2021-11-02 | Online: 2022-06-15 | Published: 2022-06-01
Dian LIN, Li PAN, Ping YI. Research on the robustness of convolutional neural networks in image recognition[J]. Chinese Journal of Network and Information Security, 2022, 8(3): 111-122.
"
| Method | Advantages | Disadvantages |
| --- | --- | --- |
| Eliminating adversarial perturbations (sketch below) | Independent of the model structure; can be attached to any model as an input-layer extension module; low time overhead | The degree of perturbation removal is hard to control: too strong and the image loses information, too weak and the defense is insufficient |
| Adversarial example detection (sketch below) | Independent of the model structure; can be attached to any model as an input-layer extension module; low time overhead | Inherent trade-off between false positives and missed detections; less effective against stronger attack algorithms |
| Adversarial training (sketch below) | Currently the most effective way to improve adversarial robustness; in theory can counter all known attacks | High training cost; the model may overfit to adversarial examples; achieving the desired robustness requires deeper network structures |
| Biologically inspired models | Structures aligned with human vision can effectively improve robustness | Models tend to be complex and hard to generalize; their effectiveness lacks deeper study; further validation from brain science is needed |
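As a concrete illustration of the first row, the sketch below shows perturbation elimination by JPEG re-compression in the spirit of Dziugaite et al. [10]. It is a minimal, model-agnostic preprocessing step; the `quality` setting is an illustrative assumption and directly embodies the trade-off the table describes.

```python
# Minimal sketch: removing adversarial perturbations by JPEG re-compression
# (cf. Dziugaite et al. [10]). Assumes Pillow; `quality` is illustrative.
import io
from PIL import Image

def jpeg_squeeze(img: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode the image as lossy JPEG before feeding it to the model.

    Higher compression (lower quality) removes more of the perturbation
    but also discards more legitimate image detail.
    """
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")
```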
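For the second row, the sketch below follows the feature-squeezing idea of Xu et al. [12]: run the model on the original input and on a "squeezed" copy, and flag the input as adversarial when the two predictions disagree too much. The bit depth and threshold are placeholder assumptions, not the paper's tuned values; tightening the threshold trades false positives against missed detections, as the table notes.

```python
# Minimal sketch: adversarial example detection by feature squeezing
# (cf. Xu et al. [12]). Assumes a PyTorch classifier over inputs in [0, 1];
# `bits` and `threshold` are illustrative placeholders.
import torch
import torch.nn.functional as F

def reduce_bit_depth(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Squeeze color depth by quantizing pixels to 2^bits levels."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def is_adversarial(model, x: torch.Tensor, threshold: float = 1.0) -> torch.Tensor:
    """Flag inputs whose prediction shifts too much under squeezing."""
    model.eval()
    with torch.no_grad():
        p_orig = F.softmax(model(x), dim=1)
        p_squeezed = F.softmax(model(reduce_bit_depth(x)), dim=1)
    # Per-example L1 distance between the two prediction vectors.
    score = (p_orig - p_squeezed).abs().sum(dim=1)
    return score > threshold
```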
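For the third row, the sketch below is a minimal single-step adversarial-training update in the style of Goodfellow et al. [4] (FGSM); Madry et al. [6] replace the one-step perturbation with multi-step PGD. The PyTorch setup, the value of ε, and the equal weighting of clean and adversarial loss are assumptions for illustration, not the paper's experimental configuration.

```python
# Minimal sketch: one step of FGSM-based adversarial training
# (cf. Goodfellow et al. [4]; Madry et al. [6] use multi-step PGD instead).
# Assumes a PyTorch classifier over inputs in [0, 1]; eps is illustrative.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=8 / 255):
    """Craft adversarial examples: x_adv = clip(x + eps * sign(grad_x L))."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps=8 / 255):
    """Update the model on an evenly weighted mix of clean and adversarial loss."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```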
[1] | LECUN Y , BOTTOU L , BENGIO Y ,et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998,86(11): 2278-2324. |
[2] | KRIZHEVSKY A , SUTSKEVER I , HINTON G . ImageNet classification with deep convolutional neural networks[J]. Communications of the ACM, 2017,60: 84-90. |
[3] | SZEGEDY C , ZAREMBA W , SUTSKEVER I ,et al. Intriguing properties of neural networks[C]// 2nd International Conference on Learning Representations. ICLR 2014. |
[4] | GOODFELLOW I J , SHLENS J , SZEGEDY C . Explaining and harnessing adversarial examples[C]// 3rd International Conference on Learning Representations. ICLR, 2015. |
[5] | CARLINI N , WAGNER D . Towards evaluating the robustness of neural networks[C]// 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017: 39-57. |
[6] | MADRY A , MAKELOV A , SCHMIDT L ,et al. Towards deep learning models resistant to adversarial attacks[C]// International Conference on Learning Representations. 2018. |
[7] | PAPERNOT N , MCDANIEL P , GOODFELLOW I ,et al. Practical black-box attacks against machine learning[C]// Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. 2017: 506-519. |
[8] | LIU Y , CHEN X , LIU C ,et al. Delving into transferable adversarial examples and black-box attacks[C]// International Conference on Learning Representations. 2017. |
[9] | BRENDEL W , RAUBER J , BETHGE M . Decision-based adversarial attacks:reliable attacks against black-box machine learning models[C]// International Conference on Learning Representations. 2018. |
[10] | DZIUGAITE G K , GHAHRAMANI Z , ROY D M . A study of the effect of jpg compression on adversarial images[J]. arXiv preprint arXiv:1608.00853, 2016. |
[11] | VINCENT P , LAROCHELLE H , BENGIO Y ,et al. Extracting and composing robust features with denoising autoencoders[C]// Proceedings of the 25th International Conference on Machine Learning. 2008: 1096-1103. |
[12] | XU W , EVANS D , QI Y . Feature squeezing:Detecting adversarial examples in deep neural networks[J]. arXiv preprint arXiv:1704.01155, 2017. |
[13] | GROSSE K , MANOHARAN P , PAPERNOT N ,et al. On the (statistical) detection of adversarial examples[J]. arXiv preprint arXiv:1702.06280, 2017. |
[14] | CARLINI N , WAGNER D . Adversarial examples are not easily detected:Bypassing ten detection methods[C]// Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. 2017: 3-14. |
[15] | SHORTEN C , KHOSHGOFTAAR T M . A survey on image data augmentation for deep learning[J]. Journal of Big Data, 2019,6(1): 1-48. |
[16] | TSIPRAS D , SANTURKAR S , ENGSTROM L ,et al. Robustness may be at odds with accuracy[C]// International Conference on Learning Representations. 2019. |
[17] | XIE C , TAN M , GONG B ,et al. Adversarial examples improve image recognition[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 819-828. |
[18] | REDDY M V , BANBURSKI A , PANT N ,et al. Biologically inspired mechanisms for adversarial robustness[J]. Advances in Neural Information Processing Systems, 2020,33. |
[19] | KIM E , REGO J , WATKINS Y ,et al. Modeling biological immunity to adversarial examples[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 4666-4675. |
[20] | FAWZI A , FROSSARD P . Manitest:Are classifiers really invariant?[C]// British Machine Vision Conference (BMVC). 2015: 106.1-106.13. |
[21] | LENC K , VEDALDI A . Understanding image representations by measuring their equivariance and equivalence[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 991-999. |
[22] | HINTON G E , KRIZHEVSKY A , WANG S D . Transforming auto-encoders[C]// International Conference on Artificial Neural Networks. Springer,Berlin,Heidelberg, 2011: 44-51. |
[23] | PATRICK M K , ADEKOYA A F , MIGHTY A A ,et al. Capsule networks–a survey[J]. Journal of King Saud University-Computer and Information Sciences, 2022,34(1): 1295-1310. |
[24] | PHONG N H , RIBEIRO B . Advanced capsule networks via context awareness[J]. Lecture Notes in Computer Science, 2019: 166-177. |
[25] | RAJASEGARAN J , JAYASUNDARA V , JAYASEKARA S ,et al. Deepcaps:Going deeper with capsule networks[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 10725-10733. |
[26] | HENDRYCKS D , DIETTERICH T . Benchmarking neural network robustness to common corruptions and perturbations[C]// International Conference on Learning Representations. 2019. |
[27] | PAPERNOT N , MCDANIEL P , WU X ,et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]// 2016 IEEE Symposium on Security and Privacy (SP). IEEE, 2016: 582-597. |
[28] | ATHALYE A , CARLINI N , WAGNER D . Obfuscated gradients give a false sense of security:Circumventing defenses to adversarial examples[C]// International Conference on Machine Learning. PMLR, 2018: 274-283. |
[29] | MCDANIEL P , PAPERNOT N , CELIK Z B . Machine learning in adversarial settings[J]. IEEE Security & Privacy, 2016,14(3): 68-72. |
[30] | STUTZ D , HEIN M , SCHIELE B . Disentangling adversarial robustness and generalization[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 6976-6987. |
[31] | ILYAS A , SANTURKAR S , ENGSTROM L ,et al. Adversarial examples are not bugs,they are features[J]. Advances in Neural Information Processing Systems, 2019,32. |
[32] | NGUYEN A , YOSINSKI J , CLUNE J . Deep neural networks are easily fooled:High confidence predictions for unrecognizable images[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 427-436. |
[33] | TRAMER F , CARLINI N , BRENDEL W ,et al. On adaptive attacks to adversarial example defenses[J]. Advances in Neural Information Processing Systems, 2020,33: 1633-1645. |
[34] | GEIRHOS R , MEDINA TEMME C R , RAUBER J ,et al. Generalisation in humans and deep neural networks[C]// Thirty-second Annual Conference on Neural Information Processing Systems 2018 (NeurIPS 2018). Curran, 2019: 7549-7561. |
[35] | ZHENG S , SONG Y , LEUNG T ,et al. Improving the robustness of deep neural networks via stability training[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 4480-4488. |
[36] | HOSSEINI H , POOVENDRAN R . Semantic adversarial examples[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018: 1614-1619. |
[37] | HUBEL D H , WIESEL T N . Receptive fields,binocular interaction and functional architecture in the cat's visual cortex[J]. The Journal of Physiology, 1962,160(1): 106-154. |
[38] | GOODALE M A , MILNER A D . Separate visual pathways for perception and action[J]. Trends in Neurosciences, 1992,15(1): 20-25. |
[39] | KATZNER S , WEIGELT S . Visual cortical networks:of mice and men[J]. Current Opinion in Neurobiology, 2013,23(2): 202-206. |
[40] | RAJALINGHAM R , DICARLO J J . Reversible inactivation of different millimeter-scale regions of primate IT results in different patterns of core object recognition deficits[J]. Neuron, 2019,102(2): 493-505.e5. |
[41] | KANWISHER N . Functional specificity in the human brain:a window into the functional architecture of the mind[J]. Proceedings of the National Academy of Sciences, 2010,107(25): 11163-11170. |
[42] | KONKLE T , OLIVA A . A real-world size organization of object responses in occipitotemporal cortex[J]. Neuron, 2012,74(6): 1114-1124. |
[43] | KRIEGESKORTE N , MUR M , RUFF D A ,et al. Matching categorical object representations in inferior temporal cortex of man and monkey[J]. Neuron, 2008,60(6): 1126-1141. |
[44] | PROKLOVA D , KAISER D , PEELEN M V . Disentangling representations of object shape and object category in human visual cortex:The animate–inanimate distinction[J]. Journal of Cognitive Neuroscience, 2016,28(5): 680-692. |
[45] | FISHER R A . The use of multiple measurements in taxonomic problems[J]. Annals of Eugenics, 1936,7(2): 179-188. |
[46] | MCCULLOCH W S , PITTS W . A logical calculus of the ideas immanent in nervous activity[J]. The Bulletin of Mathematical Biophysics, 1943,5(4): 115-133. |
[47] | EICKENBERG M , GRAMFORT A , VAROQUAUX G ,et al. Seeing it all:Convolutional network layers map the function of the human visual system[J]. NeuroImage, 2017,152: 184-194. |
[48] | HORIKAWA T , KAMITANI Y . Generic decoding of seen and imagined objects using hierarchical visual features[J]. Nature Communications, 2017,8(1): 1-15. |
[49] | ST-YVES G , NASELARIS T . The feature-weighted receptive field:an interpretable encoding model for complex feature spaces[J]. NeuroImage, 2018,180: 188-202. |
[50] | WEN H , SHI J , ZHANG Y ,et al. Neural encoding and decoding with deep learning for dynamic natural vision[J]. Cerebral Cortex, 2018,28(12): 4136-4160. |
[51] | CADIEU C F , HONG H , YAMINS D L K ,et al. Deep neural networks rival the representation of primate IT cortex for core visual object recognition[J]. PLoS Comput Biol, 2014,10(12): e1003963. |
[52] | BASHIVAN P , KAR K , DICARLO J J . Neural population control via deep image synthesis[J]. Science, 2019,364(6439). |
[53] | ULLMAN S , ASSIF L , FETAYA E ,et al. Atoms of recognition in human and computer vision[J]. Proceedings of the National Academy of Sciences, 2016,113(10): 2744-2749. |
[54] | ELSAYED G , SHANKAR S , CHEUNG B ,et al. Adversarial examples that fool both computer vision and time-limited humans[C]// Advances in Neural Information Processing Systems. 2018: 3910-3920. |
[55] | ZHOU Z , FIRESTONE C . Humans can decipher adversarial images[J]. Nature Communications, 2019,10(1). |
[56] | SANTURKAR S , TSIPRAS D , TRAN B ,et al. Image synthesis with a single (robust) classifier[J]. Advances in Neural Information Processing Systems, 2019,32. |
[57] | RITTER S , BARRETT D G T , SANTORO A ,et al. Cognitive psychology for deep neural networks:A shape bias case study[C]// International Conference on Machine Learning. PMLR, 2017: 2940-2949. |
[58] | HOSSEINI H , XIAO B , JAISWAL M ,et al. Assessing shape bias property of convolutional neural networks[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018: 1923-1931. |
[59] | GEIRHOS R , RUBISCH P , MICHAELIS C ,et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness[C]// International Conference on Learning Representations, ICLR 2019. |