Chinese Journal of Network and Information Security, 2020, Vol. 6, Issue (5): 36-53. doi: 10.11959/j.issn.2096-109x.2020071


Adversarial attacks and defenses in deep learning

Ximeng LIU 1,2, Lehui XIE 1, Yaopeng WANG 1, Xuru LI 3

  1. College of Mathematics and Computer Science, Fuzhou University, Fuzhou 350108, China
    2. Guangdong Provincial Key Laboratory of Data Security and Privacy Protection, Guangzhou 510632, China
    3. School of Computer Science and Technology, East China Normal University, Shanghai 200241, China
  • Revised: 2020-05-12 Online: 2020-10-15 Published: 2020-10-19
  • Supported by:
    The National Natural Science Foundation of China (U1804263); The National Natural Science Foundation of China (61702105); Opening Project of Guangdong Provincial Key Laboratory of Data Security and Privacy Protection (2017B030301004-12); The Key Research and Development Program of Shaanxi Province, China (2019KW-053)

Abstract:

An adversarial example is an image modified by adding imperceptible perturbations that cause a deep neural network to make incorrect decisions. Adversarial examples seriously threaten the availability of deep learning systems and pose great security risks to them. Therefore, representative adversarial attack methods were analyzed, covering both white-box and black-box attacks. Based on the current state of adversarial attacks and defenses, recent domestic and foreign defense strategies were surveyed, including input pre-processing, improving model robustness, and malicious-input detection. Finally, future research directions in the field of adversarial attacks and adversarial defenses were given.
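To make the notion of an adversarial perturbation concrete, the following is a minimal, illustrative sketch of a single-step gradient-sign (FGSM-style) white-box attack, one of the representative attack families the survey covers. It assumes a PyTorch image classifier; the names model, x, y, and epsilon are placeholders and not from the paper.

```python
# Illustrative sketch only (not the paper's method): one FGSM-style step,
# assuming a PyTorch classifier `model`, an input image batch `x` in [0, 1],
# and ground-truth labels `y`. All names here are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with one gradient-sign step on the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)      # loss the attacker wants to increase
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss,
    # then clip back to the valid image range [0, 1].
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```

With a small epsilon the perturbed image is visually indistinguishable from the original, yet the classifier's prediction can flip, which is exactly the availability threat the abstract describes.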

Key words: adversarial examples, adversarial attacks, adversarial defenses, deep learning security

