[1] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]// Advances in Neural Information Processing Systems. 2012: 1097-1105.

[2] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.

[3] GRAVES A, MOHAMED A, HINTON G. Speech recognition with deep recurrent neural networks[C]// 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. 2013: 6645-6649.

[4] HERMANN K M, BLUNSOM P. Multilingual distributed representations without word alignment[J]. arXiv preprint arXiv:1312.6173, 2013.

[5] BAHDANAU D, CHO K, BENGIO Y. Neural machine translation by jointly learning to align and translate[J]. arXiv preprint arXiv:1409.0473, 2014.

[6] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Playing Atari with deep reinforcement learning[J]. arXiv preprint arXiv:1312.5602, 2013.
[7] PARKHI O M, VEDALDI A, ZISSERMAN A, et al. Deep face recognition[C]// Proceedings of the British Machine Vision Conference (BMVC). 2015.

[8] SUN Y, WANG X, TANG X. Deep learning face representation from predicting 10 000 classes[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014: 1891-1898.

[9] CHEN C, SEFF A, KORNHAUSER A, et al. DeepDriving: learning affordance for direct perception in autonomous driving[C]// Proceedings of the IEEE International Conference on Computer Vision. 2015: 2722-2730.

[10] SZEGEDY C, ZAREMBA W, SUTSKEVER I, et al. Intriguing properties of neural networks[J]. arXiv preprint arXiv:1312.6199, 2013.

[11] GONG Y, LI B, POELLABAUER C, et al. Real-time adversarial attacks[J]. arXiv preprint arXiv:1905.13399, 2019.
[12] GU T, DOLAN-GAVITT B, GARG S. BadNets: identifying vulnerabilities in the machine learning model supply chain[J]. arXiv preprint arXiv:1708.06733, 2017.

[13] LIU Y, MA S, AAFER Y, et al. Trojaning attack on neural networks[R]. 2017.

[14] SHAFAHI A, HUANG W R, NAJIBI M, et al. Poison Frogs! targeted clean-label poisoning attacks on neural networks[C]// Advances in Neural Information Processing Systems. 2018: 6103-6113.

[15] ZOU M, SHI Y, WANG C, et al. PoTrojan: powerful neural-level trojan designs in deep learning models[J]. arXiv preprint arXiv:1802.03043, 2018.

[16] ZHU C, HUANG W R, SHAFAHI A, et al. Transferable clean-label poisoning attacks on deep neural nets[J]. arXiv preprint arXiv:1905.05897, 2019.
[17] BAGDASARYAN E, VEIT A, HUA Y, et al. How to backdoor federated learning[J]. arXiv preprint arXiv:1807.00459, 2018.

[18] SUN Z, KAIROUZ P, SURESH A T, et al. Can you really backdoor federated learning?[J]. arXiv preprint arXiv:1911.07963, 2019.

[19] YANG Z, IYER N, REIMANN J, et al. Design of intentional backdoors in sequential models[J]. arXiv preprint arXiv:1902.09972, 2019.

[20] KIOURTI P, WARDEGA K, JHA S, et al. TrojDRL: trojan attacks on deep reinforcement learning agents[J]. arXiv preprint arXiv:1903.06638, 2019.

[21] YAO Y, LI H, ZHENG H, et al. Regula sub-rosa: latent backdoor attacks on deep neural networks[J]. arXiv preprint arXiv:1905.10447, 2019.
[22] YAO Y, LI H, ZHENG H, et al. Latent backdoor attacks on deep neural networks[C]// Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 2019: 2041-2055.

[23] XU G, LI H, LIU S, et al. VerifyNet: secure and verifiable federated learning[J]. IEEE Transactions on Information Forensics and Security, 2019, 15: 911-926.

[24] TAN T J L, SHOKRI R. Bypassing backdoor detection algorithms in deep learning[J]. arXiv preprint arXiv:1905.13409, 2019.

[25] JI Y, ZHANG X, WANG T. Backdoor attacks against learning systems[C]// 2017 IEEE Conference on Communications and Network Security (CNS). 2017: 1-9.

[26] BIGGIO B, NELSON B, LASKOV P. Poisoning attacks against support vector machines[J]. arXiv preprint arXiv:1206.6389, 2012.

[27] HUANG L, JOSEPH A D, NELSON B, et al. Adversarial machine learning[C]// Proceedings of the 4th ACM Workshop on Security and Artificial Intelligence. 2011: 43-58.
[28] MAHLOUJIFAR S, MAHMOODY M, MOHAMMED A. Multi-party poisoning through generalized p-tampering[J]. arXiv preprint arXiv:1809.03474, 2018.

[29] CHEN X, LIU C, LI B, et al. Targeted backdoor attacks on deep learning systems using data poisoning[J]. arXiv preprint arXiv:1712.05526, 2017.

[30] DAI J, CHEN C, LI Y. A backdoor attack against LSTM-based text classification systems[J]. IEEE Access, 2019, 7: 138872-138878.

[31] DUMFORD J, SCHEIRER W. Backdooring convolutional neural networks via targeted weight perturbations[J]. arXiv preprint arXiv:1812.03128, 2018.

[32] JI Y, ZHANG X, JI S, et al. Model-reuse attacks on deep learning systems[C]// Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 2018: 349-363.

[33] PAN S J, YANG Q. A survey on transfer learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345-1359.
[34] LIU Q, ZHAI J W, ZHANG Z C, et al. A survey on deep reinforcement learning[J]. Chinese Journal of Computers, 2018, 41(1): 1-27.

[35] Decentralized ML[EB].
[36] KONEČNÝ J, MCMAHAN H B, YU F X, et al. Federated learning: strategies for improving communication efficiency[J]. arXiv preprint arXiv:1610.05492, 2016.

[37] MCMAHAN H B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[J]. arXiv preprint arXiv:1602.05629, 2016.

[38] TRAN B, LI J, MADRY A. Spectral signatures in backdoor attacks[C]// Advances in Neural Information Processing Systems. 2018: 8000-8010.

[39] CHEN B, CARVALHO W, BARACALDO N, et al. Detecting backdoor attacks on deep neural networks by activation clustering[J]. arXiv preprint arXiv:1811.03728, 2018.

[40] LIU K, DOLAN-GAVITT B, GARG S. Fine-pruning: defending against backdooring attacks on deep neural networks[C]// International Symposium on Research in Attacks, Intrusions, and Defenses. Springer, Cham, 2018: 273-294.

[41] PANG R. The tale of evil twins: adversarial inputs versus backdoored models[J]. arXiv preprint arXiv:1911.01559, 2019.