Chinese Journal of Network and Information Security ›› 2023, Vol. 9 ›› Issue (3): 16-27. doi: 10.11959/j.issn.2096-109x.2023034

• Research Papers •

  • About the authors: Xianyi CHEN (1983- ), male, born in Enshi, Hubei Province, is an associate professor at Nanjing University of Information Science and Technology; his main research interests include blockchain security, big data security, and artificial intelligence security.
    Jun GU (1996- ), male, born in Yancheng, Jiangsu Province, is a master's student at Nanjing University of Information Science and Technology; his main research interests include adversarial examples and artificial intelligence security.
    Kai YAN (1998- ), male, born in Huai'an, Jiangsu Province, is a master's student at Nanjing University of Information Science and Technology; his main research interest is adversarial attacks.
    Dong JIANG (1998- ), male, born in Chengdu, Sichuan Province, is a master's student at Nanjing University of Information Science and Technology; his main research interests include artificial intelligence security and adversarial examples.
    Linfeng XU (1998- ), male, born in Nantong, Jiangsu Province, is a master's student at Nanjing University of Information Science and Technology; his main research interests include information security and adversarial examples.
    Zhangjie FU (1983- ), male, born in Nanyang, Henan Province, is a professor and doctoral supervisor at Nanjing University of Information Science and Technology; his main research interests include blockchain security, digital forensics, and artificial intelligence security.

Double adversarial attack against license plate recognition system

Xianyi CHEN1, Jun GU1, Kai YAN1, Dong JIANG1, Linfeng XU1, Zhangjie FU1,2   

  1. Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing 210044, China
    2. The State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an 710126, China
  • Revised: 2023-04-18 Online: 2023-06-25 Published: 2023-06-01
  • Supported by:
    The National Key R&D Program of China(2021YFB2700900);The National Natural Science Foundation of China(62172232);The National Natural Science Foundation of China(62172233);The Jiangsu Basic Research Programs-Natural Science Foundation(BK20200039);The Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) Fund


Abstract:

Recent studies have revealed that artificial intelligence systems built on deep neural networks (DNN) are highly vulnerable to attacks based on adversarial examples. To address this issue, a double adversarial attack (DAA) method was proposed against DNN-based license plate recognition (LPR) systems. An adversarial patch added at the pattern location of the license plate renders the target detection subsystem of the LPR system unable to detect the license plate class. Additionally, natural rust and stains were simulated by adding irregular, simply connected perturbation spots to the license plate number, causing the plate-number recognition subsystem to misrecognize it. Adversarial patches of different shapes and adversarial spots of different colors were designed for license plates, producing adversarial plates that were then transferred to the physical world. Experimental results show that the designed adversarial examples are imperceptible to the human eye and can deceive license plate recognition systems such as EasyPR, with an attack success rate of up to 99% in the physical world. This study of adversarial attacks on LPR and the vulnerability of deep learning contributes positively to improving the robustness of license plate recognition models.
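The abstract describes the detection-evasion half of the attack only at a high level. As a minimal sketch of the underlying idea, the loop below optimizes a localized patch by gradient descent on a detector's "plate" score, clipped to the valid pixel range. The linear toy detector, the patch location, the step size, and the FGSM-style sign update are all illustrative assumptions standing in for the paper's DNN detector and optimization details, not the authors' implementation:

```python
import numpy as np

# Toy stand-in for the LPR detection subsystem: a fixed linear scorer whose
# output is higher when a "plate" is detected.  The paper attacks a DNN
# detector; this linear model only illustrates the optimization loop.
rng = np.random.default_rng(0)
w = rng.normal(size=(32, 96))            # detector weights (assumed)

def detect_score(img):
    """Detection score of a 32x96 grayscale image in [0, 1]."""
    return float(np.sum(w * img))

plate = np.full((32, 96), 0.8)           # clean synthetic plate image
patch_slice = (slice(4, 12), slice(4, 20))  # pattern-area location (assumed)

# Optimize only the patch region: step against the score's gradient, which
# for a linear scorer is just the weights over that region.
patch = plate[patch_slice].copy()
lr = 0.05
for _ in range(200):
    grad = w[patch_slice]
    patch = np.clip(patch - lr * np.sign(grad), 0.0, 1.0)  # FGSM-style step

adv = plate.copy()
adv[patch_slice] = patch                 # paste the optimized patch back
print(detect_score(adv) < detect_score(plate))  # patch suppresses the score
```

The real attack would replace `detect_score` with the detector network's class confidence and obtain `grad` by backpropagation; the clipping step is what keeps the patch printable when it is transferred to the physical world.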

Key words: license plate recognition, adversarial patch, adversarial spot, adversarial attack

