[1] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]// Advances in Neural Information Processing Systems. 2012: 1097-1105.
[2] BOJARSKI M, DEL TESTA D, DWORAKOWSKI D, et al. End to end learning for self-driving cars[J]. arXiv preprint arXiv:1604.07316, 2016.
[3] DAHL G E, STOKES J W, DENG L, et al. Large-scale malware classification using random projections and neural networks[C]// 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. 2013: 3422-3426.
[4] MIRSKY Y, DOITSHMAN T, ELOVICI Y, et al. Kitsune: an ensemble of autoencoders for online network intrusion detection[J]. arXiv preprint arXiv:1802.09089, 2018.
[5] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013.
[6] DHILLON G S, AZIZZADENESHELI K, LIPTON Z C, et al. Stochastic activation pruning for robust adversarial defense[J]. arXiv preprint arXiv:1803.01442, 2018.
[7] GOODFELLOW I J, SHLENS J, SZEGEDY C. Explaining and harnessing adversarial examples[J]. arXiv preprint arXiv:1412.6572, 2014.
[8] KURAKIN A, GOODFELLOW I, BENGIO S. Adversarial examples in the physical world[J]. arXiv preprint arXiv:1607.02533, 2016.
[9] CARLINI N, WAGNER D. Defensive distillation is not robust to adversarial examples[J]. arXiv preprint arXiv:1607.04311, 2016.
[10] PAPERNOT N, MCDANIEL P, JHA S, et al. The limitations of deep learning in adversarial settings[C]// 2016 IEEE European Symposium on Security and Privacy (EuroS&P). 2016: 372-387.
[11] LIU Y, MA S, AAFER Y, et al. Trojaning attack on neural networks[C]// Network and Distributed System Security Symposium. 2018.
[12] GU S, RIGAZIO L. Towards deep neural network architectures robust to adversarial examples[J]. arXiv preprint arXiv:1412.5068, 2014.
[13] FEINMAN R, CURTIN R R, SHINTRE S, et al. Detecting adversarial samples from artifacts[J]. arXiv preprint arXiv:1703.00410, 2017.
[14] GROSSE K, MANOHARAN P, PAPERNOT N, et al. On the (statistical) detection of adversarial examples[J]. arXiv preprint arXiv:1702.06280, 2017.
[15] MA X, LI B, WANG Y, et al. Characterizing adversarial subspaces using local intrinsic dimensionality[J]. arXiv preprint arXiv:1801.02613, 2018.
[16] XU W, EVANS D, QI Y. Feature squeezing: detecting adversarial examples in deep neural networks[J]. arXiv preprint arXiv:1704.01155, 2017.
[17] MENG D, CHEN H. MagNet: a two-pronged defense against adversarial examples[C]// The 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017: 135-147.
[18] LIAO F, LIANG M, DONG Y, et al. Defense against adversarial attacks using high-level representation guided denoiser[C]// The IEEE Conference on Computer Vision and Pattern Recognition. 2018: 1778-1787.
[19] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278-2324.
[20] BROWN T B, MANÉ D, ROY A, et al. Adversarial patch[J]. arXiv preprint arXiv:1712.09665, 2017.
[21] EYKHOLT K, EVTIMOV I, FERNANDES E, et al. Robust physical-world attacks on deep learning models[J]. arXiv preprint arXiv:1707.08945, 2017.
[22] PEI K, CAO Y, YANG J, et al. DeepXplore: automated whitebox testing of deep learning systems[C]// The 26th Symposium on Operating Systems Principles. 2017: 1-18.
[23] BIGGIO B, ROLI F. Wild patterns: ten years after the rise of adversarial machine learning[J]. Pattern Recognition, 2018, 84: 317-331.
[24] MOOSAVI-DEZFOOLI S M, FAWZI A, FROSSARD P. DeepFool: a simple and accurate method to fool deep neural networks[C]// IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2574-2582.
[25] CARLINI N, WAGNER D. Towards evaluating the robustness of neural networks[C]// 2017 IEEE Symposium on Security and Privacy (SP). 2017: 39-57.
[26] KINGMA D P, BA J. Adam: a method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
[27] ROUHANI B D, SAMRAGH M, JAVAHERIPI M, et al. DeepFense: online accelerated defense against adversarial deep learning[C]// IEEE/ACM International Conference on Computer-Aided Design (ICCAD). 2018: 1-8.
[28] SONG Y, KIM T, NOWOZIN S, et al. PixelDefend: leveraging generative models to understand and defend against adversarial examples[J]. arXiv preprint arXiv:1710.10766, 2017.
[29] XIE C, WANG J, ZHANG Z, et al. Mitigating adversarial effects through randomization[J]. arXiv preprint arXiv:1711.01991, 2017.
[30] PAPERNOT N, MCDANIEL P, SINHA A, et al. Towards the science of security and privacy in machine learning[J]. arXiv preprint arXiv:1611.03814, 2016.
[31] PAPERNOT N, MCDANIEL P, WU X, et al. Distillation as a defense to adversarial perturbations against deep neural networks[C]// 2016 IEEE Symposium on Security and Privacy (SP). 2016: 582-597.
[32] PAPERNOT N, MCDANIEL P, GOODFELLOW I, et al. Practical black-box attacks against machine learning[C]// ACM on Asia Conference on Computer and Communications Security. 2017: 506-519.
[33] ATHALYE A, CARLINI N, WAGNER D. Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples[J]. arXiv preprint arXiv:1802.00420, 2018.
[34] BHAGOJI A N, CULLINA D, MITTAL P. Dimensionality reduction as a defense against evasion attacks on machine learning classifiers[J]. arXiv preprint arXiv:1704.02654, 2017.
[35] GONG Z, WANG W, KU W S. Adversarial and clean data are not twins[J]. arXiv preprint arXiv:1704.04960, 2017.
[36] HENDRYCKS D, GIMPEL K. Early methods for detecting adversarial images[J]. arXiv preprint arXiv:1608.00530, 2016.
[37] TAX D M J, DUIN R P W. Support vector domain description[J]. Pattern Recognition Letters, 1999, 20(11-13): 1191-1199.
[38] CARLINI N, WAGNER D. Adversarial examples are not easily detected: bypassing ten detection methods[C]// The 10th ACM Workshop on Artificial Intelligence and Security. 2017: 3-14.
[39] LU P H, CHEN P Y, YU C M. On the limitation of local intrinsic dimensionality for characterizing the subspaces of adversarial examples[J]. arXiv preprint arXiv:1803.09638, 2018.
[40] GUO C, RANA M, CISSE M, et al. Countering adversarial images using input transformations[J]. arXiv preprint arXiv:1711.00117, 2017.
[41] TAO G, MA S, LIU Y, et al. Attacks meet interpretability: attribute-steered detection of adversarial samples[C]// Advances in Neural Information Processing Systems. 2018: 7717-7728.
[42] GILMER J, METZ L, FAGHRI F, et al. Adversarial spheres[J]. arXiv preprint arXiv:1801.02774, 2018.
[43] PERERA P, PATEL V M. Learning deep features for one-class classification[J]. IEEE Transactions on Image Processing, 2019, 28(11): 5450-5463.
[44] TAX D M J, DUIN R P W. Data domain description using support vectors[C]// ESANN. 1999, 99: 251-256.
[45] KRIZHEVSKY A, HINTON G. Learning multiple layers of features from tiny images[R]. Technical Report, University of Toronto, 2009.
[46] RAUBER J, BRENDEL W, BETHGE M. Foolbox: a Python toolbox to benchmark the robustness of machine learning models[J]. arXiv preprint arXiv:1707.04131, 2017.
[47] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.