Chinese Journal of Network and Information Security ›› 2024, Vol. 10 ›› Issue (1): 169-180. doi: 10.11959/j.issn.2096-109x.2024009


Adversarial patch defense algorithm based on PatchTracker

Zhenjie XIAO1, Shiyu HUANG1, Feng YE1,2, Liqing HUANG1,2, Tianqiang HUANG1,2   

  1. College of Computer and Cyber Security, Fujian Normal University, Fuzhou 350117, China
  2. Digital Fujian Institute of Big Data Security Technology, Fuzhou 350117, China
  • Revised: 2023-12-16  Online: 2024-02-01  Published: 2024-02-01
  • Supported by:
    The National Natural Science Foundation of China (62072106); Fujian Innovation Strategy Research Program Project (2023R0156)

Abstract:

Deep neural networks are widely used for object detection in many fields. However, adversarial patch attacks, which add localized perturbations to images to mislead deep neural networks, pose a significant threat to vision-based object detection systems. To address this issue, an adversarial patch defense algorithm based on PatchTracker was proposed, exploiting the semantic differences between adversarial patches and image backgrounds. The algorithm comprised an upstream patch detector and a downstream data augmentation module. The upstream patch detector employed a YOLOv5 (you only look once, version 5) model with an attention mechanism to locate adversarial patches, improving detection accuracy for small-scale patches. The detected regions were then covered with appropriate pixel values to remove the adversarial patches. This module effectively reduced the impact of adversarial examples without relying on large amounts of training data. The downstream data augmentation module enhanced the robustness of the object detector by modifying the model training paradigm. Finally, the image with patches removed was fed into the downstream YOLOv5 object detection model, which had been strengthened through data augmentation. Cross-validation was performed on the public TT100K traffic sign dataset. Experimental results demonstrated that, compared with the undefended case, the proposed algorithm effectively defended against various types of generic adversarial patch attacks, improving the mean average precision (mAP) on adversarial patch images by approximately 65% and effectively reducing the false negative rate for small-scale adversarial patches. Moreover, compared with existing algorithms, the proposed approach significantly improved the accuracy of neural networks on adversarial samples. In addition, the method exhibited good compatibility, as it required no modification of the downstream model structure.
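The two-stage pipeline described above can be illustrated with a minimal sketch. The function names (mask_patches, defend_and_detect), the detector interfaces passed in as callables, and the per-channel mean fill used to cover detected patch regions are illustrative assumptions, not the authors' exact implementation or the PatchTracker code itself.

```python
# Minimal sketch of a PatchTracker-style two-stage defense:
# 1) an upstream patch detector locates adversarial patches;
# 2) the detected regions are covered with a fill value;
# 3) the cleaned image is passed to the downstream object detector.
import numpy as np


def mask_patches(image: np.ndarray, boxes) -> np.ndarray:
    """Cover each detected adversarial-patch region with the image mean colour.

    image : H x W x 3 uint8/float array
    boxes : iterable of (x1, y1, x2, y2) pixel coordinates from the upstream detector
    """
    cleaned = image.copy()
    # Per-channel mean as the "appropriate pixel value" (an assumed fill strategy).
    fill = image.reshape(-1, image.shape[-1]).mean(axis=0).astype(image.dtype)
    for x1, y1, x2, y2 in boxes:
        cleaned[y1:y2, x1:x2] = fill
    return cleaned


def defend_and_detect(image, patch_detector, object_detector, conf_thresh=0.5):
    """Upstream: locate patches above a confidence threshold; mask them; downstream: detect objects.

    patch_detector(image)  -> iterable of ((x1, y1, x2, y2), score)   # assumed interface
    object_detector(image) -> downstream detection results             # assumed interface
    """
    boxes = [box for box, score in patch_detector(image) if score >= conf_thresh]
    cleaned = mask_patches(image, boxes)
    return object_detector(cleaned)
```

Because the defense only preprocesses the input image, the downstream detector (here object_detector) can be any trained model; this matches the compatibility claim that no change to the downstream model structure is required.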

Key words: deep learning security, adversarial attack and defense, adversarial patch, object detection

