Chinese Journal of Network and Information Security ›› 2022, Vol. 8 ›› Issue (3): 111-122. doi: 10.11959/j.issn.2096-109x.2022037

• Academic Paper •

Research on the robustness of convolutional neural networks in image recognition

Dian LIN, Li PAN, Ping YI

  1. School of Cyber Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
  • Revised: 2021-11-02  Online: 2022-06-15  Published: 2022-06-01
  • About the authors: Dian LIN (1996− ), male, born in Fuzhou, Fujian; master's student at Shanghai Jiao Tong University; main research interest: artificial intelligence security
    Li PAN (1974− ), male, born in Suzhou, Jiangsu; Ph.D.; professor and doctoral supervisor at Shanghai Jiao Tong University; main research interests: network big data analysis, cloud computing and big data security, and network security management
    Ping YI (1969− ), male, born in Luoyang, Henan; Ph.D.; associate research fellow at Shanghai Jiao Tong University; main research interests: artificial intelligence security, wireless network security, and software and system security
  • Supported by: The National Natural Science Foundation of China (62172278)



Abstract:

Convolutional neural networks are among the key technologies for image recognition and processing in artificial intelligence, and their wide application makes research on their robustness increasingly important. Previous research on the robustness of neural networks was rather general and mostly focused on adversarial robustness, which hinders deeper study of the mechanisms underlying neural network robustness and no longer matches the development of artificial intelligence. Related research from neuroscience was introduced and the concept of visual robustness was proposed. By studying the similarities and differences between neural network models and the human visual system, the internal mechanisms and flaws of neural network robustness were revealed. Research on neural network robustness in recent years was reviewed, and the reasons why neural network models lack robustness were analyzed. The lack of robustness of neural networks is reflected in their sensitivity to small perturbations: neural networks tend to learn high-frequency information, which humans can hardly perceive, for computation and inference. This high-frequency information is easily destroyed by perturbations, which ultimately causes models to make wrong judgments. Previous research on robustness mostly focused on the mathematical properties of models and could not overcome the inherent limitations of neural networks. Visual robustness extends the traditional concept of robustness. The traditional concept measures a model's ability to recognize distorted image examples: both distorted examples and the original clean examples yield correct outputs on a robust model. Visual robustness measures the consistency between models and humans in discrimination ability, which requires combining the research methods and results of neuroscience and psychology with artificial intelligence. The development of neuroscience in the field of vision was reviewed, and the application of cognitive psychology research methods to neural network robustness was discussed. The human visual system has advantages in learning and abstraction, while neural network models outperform humans in computation and memory speed. The difference between the physiological structure of the human brain and the logical structure of neural network models is the key factor behind the robustness problem of neural networks. Research on visual robustness requires a deeper understanding of the human visual system. Revealing the differences in cognitive mechanisms between the human visual system and neural network models, and improving algorithms accordingly, is the main development trend for neural network robustness and for artificial intelligence algorithms in general.
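The sensitivity to small perturbations discussed in the abstract can be sketched with the fast gradient sign method (FGSM) of Goodfellow et al. This is an illustration of the general phenomenon, not an experiment from the paper: a toy linear classifier stands in for a CNN, and all dimensions and step sizes below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "trained" linear model on flattened pixel vectors:
# score = w . x + b, predict class 1 if score > 0.
d = 1000                      # input dimension (e.g. flattened image pixels)
w = rng.normal(size=d)
b = 0.0

# A clean input that the model classifies as class 1.
x = 0.01 * np.sign(w) + 0.001 * rng.normal(size=d)
clean_score = w @ x + b       # positive: class 1

# FGSM: step each pixel by epsilon against the gradient of the score.
# For a linear model the gradient with respect to x is simply w.
epsilon = 0.02                # small per-pixel budget (L-infinity norm)
x_adv = x - epsilon * np.sign(w)
adv_score = w @ x_adv + b     # the tiny perturbation flips the prediction

print(clean_score, adv_score)
```

Although each pixel moves by at most `epsilon`, the perturbation is aligned with the gradient in every coordinate, so its effect on the score accumulates across all `d` dimensions — the same accumulation that makes high-dimensional image classifiers fragile.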
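The abstract's claim that small perturbations act mostly on high-frequency content can be illustrated with a 2D FFT low-pass split. Again this is our own sketch on a synthetic smooth image, not the paper's experiment; the image, noise scale, and cutoff radius are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# A smooth synthetic "image": low-frequency content only.
yy, xx = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
img = np.sin(2 * np.pi * xx) + np.cos(2 * np.pi * yy)

def low_pass(a, radius=8):
    """Keep only spatial frequencies within `radius` of the origin."""
    f = np.fft.fftshift(np.fft.fft2(a))
    ky, kx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2)
    mask = (kx ** 2 + ky ** 2) <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

noise = 0.01 * rng.normal(size=(n, n))   # small, visually negligible
perturbed = img + noise

# The perturbation barely changes the low-frequency view of the image...
low_change = np.abs(low_pass(perturbed) - low_pass(img)).mean()
# ...but strongly changes what remains after the low pass: the
# high-frequency residue that models may rely on and humans ignore.
high_change = np.abs((perturbed - low_pass(perturbed))
                     - (img - low_pass(img))).mean()

print(low_change, high_change)
```

Because white noise spreads its energy uniformly over all frequencies while the low-pass mask keeps only a few percent of them, almost all of the perturbation lands in the high-frequency band — consistent with the mechanism the abstract describes.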

Key words: convolutional neural network, image recognition, robustness, adversarial example, human vision

CLC number:
