[1] ATENIESE G, MANCINI L V, SPOGNARDI A, et al. Hacking smart machines with smarter ones: how to extract meaningful data from machine learning classifiers[J]. International Journal of Security and Networks, 2015, 10(3): 137-150.
[2] JUUTI M, SZYLLER S, MARCHAL S, et al. PRADA: protecting against DNN model stealing attacks[C]// 2019 IEEE European Symposium on Security and Privacy. 2019: 512-527.
[3] YANG Q, LIU Y, CHEN T, et al. Federated machine learning: concept and applications[J]. ACM Transactions on Intelligent Systems and Technology, 2019, 10(2): 1-19.
[4] PAPERNOT N, MCDANIEL P D, GOODFELLOW I J, et al. Practical black-box attacks against machine learning[C]// Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security. 2017: 506-519.
[5] TRAMÈR F, ZHANG F, JUELS A, et al. Stealing machine learning models via prediction APIs[C]// 25th USENIX Security Symposium (USENIX Security 16). 2016: 601-618.
[6] WANG B H, GONG N Z. Stealing hyperparameters in machine learning[C]// 2018 IEEE Symposium on Security and Privacy. 2018: 36-52.
[7] OH S J, SCHIELE B, FRITZ M. Towards reverse-engineering black-box neural networks[J]. arXiv preprint arXiv:1711.01768, 2017.
[8] SATHISH K, RAMASUBBAREDDY S, GOVINDA K. Detection and localization of multiple objects using VGGNet and single shot detection[M]// Emerging Research in Data Engineering Systems and Computer Communications. Singapore: Springer, 2020: 427-439.
[9] TARG S, ALMEIDA D, LYMAN K. ResNet in ResNet: generalizing residual architectures[J]. arXiv preprint arXiv:1603.08029, 2016.
[10] CORREIA-SILVA J R, BERRIEL R F, BADUE C, et al. Copycat CNN: stealing knowledge by persuading confession with random non-labeled data[C]// 2018 International Joint Conference on Neural Networks. 2018: 1-8.
[11] BATINA L, BHASIN S, JAP D, et al. CSI NN: reverse engineering of neural network architectures through electromagnetic side channel[C]// 28th USENIX Security Symposium (USENIX Security 19). 2019: 515-532.
[12] YU H G, YANG K C, ZHANG T, et al. CloudLeak: large-scale deep learning models stealing through adversarial examples[C]// Network and Distributed System Security Symposium. 2020.
[13] FREDRIKSON M, JHA S, RISTENPART T. Model inversion attacks that exploit confidence information and basic countermeasures[C]// Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. 2015: 1322-1333.
[14] JANG E, GU S, POOLE B. Categorical reparameterization with Gumbel-Softmax[J]. arXiv preprint arXiv:1611.01144, 2016.
[15] SHOKRI R, STRONATI M, SONG C Z, et al. Membership inference attacks against machine learning models[C]// 2017 IEEE Symposium on Security and Privacy. 2017: 3-18.
[16] YEOM S, GIACOMELLI I, FREDRIKSON M, et al. Privacy risk in machine learning: analyzing the connection to overfitting[C]// 31st IEEE Computer Security Foundations Symposium. 2018: 268-282.
[17] SALEM A, ZHANG Y, HUMBERT M, et al. ML-Leaks: model and data independent membership inference attacks and defenses on machine learning models[C]// 26th Annual Network and Distributed System Security Symposium. 2019.
[18] LONG Y H, BINDSCHAEDLER V, GUNTER C A. Towards measuring membership privacy[J]. arXiv preprint arXiv:1712.09136, 2017.
[19] LONG Y H, BINDSCHAEDLER V, WANG L, et al. Understanding membership inferences on well-generalized learning models[J]. arXiv preprint arXiv:1802.04889, 2018.
[20] YEOM S, FREDRIKSON M, JHA S. The unintended consequences of overfitting: training data inference attacks[J]. arXiv preprint arXiv:1709.01604, 2017.
[21] SAM D B, SURYA S, BABU R V. Switching convolutional neural network for crowd counting[C]// 2017 IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4031-4039.
[22] KUHA J, MILLS C. On group comparisons with logistic regression models[J]. Sociological Methods & Research, 2020, 49(2): 498-525.
[23] PAL M. Random forest classifier for remote sensing classification[J]. International Journal of Remote Sensing, 2005, 26(1): 217-222.
[24] SONG L, SHOKRI R, MITTAL P. Privacy risks of securing machine learning models against adversarial examples[C]// Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 2019: 241-257.
[25] SALEM A, BHATTACHARYA A, BACKES M, et al. Updates-Leak: data set inference and reconstruction attacks in online learning[J]. arXiv preprint arXiv:1904.01067, 2019.
[26] HAYES J, MELIS L, DANEZIS G, et al. LOGAN: membership inference attacks against generative models[J]. Proceedings on Privacy Enhancing Technologies, 2019(1): 133-152.
[27] NASR M, SHOKRI R, HOUMANSADR A. Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning[C]// 2019 IEEE Symposium on Security and Privacy. 2019: 739-753.
[28] LEINO K, FREDRIKSON M. Stolen memories: leveraging model memorization for calibrated white-box membership inference[J]. arXiv preprint arXiv:1906.11798, 2019.
[29] MELIS L, SONG C Z, CRISTOFARO E D, et al. Exploiting unintended feature leakage in collaborative learning[C]// 2019 IEEE Symposium on Security and Privacy. 2019: 691-706.
[30] WANG Z B, SONG M K, ZHANG Z F, et al. Beyond inferring class representatives: user-level privacy leakage from federated learning[C]// 2019 IEEE Conference on Computer Communications. 2019: 2512-2520.
[31] HITAJ B, ATENIESE G, PÉREZ-CRUZ F. Deep models under the GAN: information leakage from collaborative deep learning[C]// Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017: 603-618.
[32] ZHU L G, LIU Z J, HAN S. Deep leakage from gradients[C]// Advances in Neural Information Processing Systems. 2019: 14747-14756.
[33] FREDRIKSON M, JHA S, RISTENPART T. Model inversion attacks that exploit confidence information and basic countermeasures[C]// Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. 2015: 1322-1333.
[34] NASR M, SHOKRI R, HOUMANSADR A. Machine learning with membership privacy using adversarial regularization[C]// Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 2018: 634-646.
[35] WANG C, LIU G Y, HUANG H J, et al. MIASec: enabling data indistinguishability against membership inference attacks in MLaaS[J]. IEEE Transactions on Sustainable Computing, 2020, 5(3): 365-376.
[36] WU N, FAROKHI F, SMITH D, et al. The value of collaboration in convex machine learning with differential privacy[C]// 2020 IEEE Symposium on Security and Privacy. 2020: 304-317.
[37] PATRA A, SURESH A. BLAZE: blazing fast privacy-preserving machine learning[J]. arXiv preprint arXiv:2005.09042, 2020.
[38] JIA J Y, SALEM A, BACKES M, et al. MemGuard: defending against black-box membership inference attacks via adversarial examples[C]// Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. 2019: 259-274.
[39] HE Y Z, MENG G Z, CHEN K, et al. Towards privacy and security of deep learning systems: a survey[J]. arXiv preprint arXiv:1911.12562, 2019.
[40] KESARWANI M, MUKHOTY B, ARYA V, et al. Model extraction warning in MLaaS paradigm[C]// Proceedings of the 34th Annual Computer Security Applications Conference. 2018: 371-380.
[41] OH S J, SCHIELE B, FRITZ M. Towards reverse-engineering black-box neural networks[M]// Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Cham: Springer, 2019: 121-144.
[42] OREKONDY T, SCHIELE B, FRITZ M. Knockoff nets: stealing functionality of black-box models[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019: 4954-4963.
[43] CHANDRASEKARAN V, CHAUDHURI K, GIACOMELLI I, et al. Exploring connections between active learning and model extraction[J]. arXiv preprint arXiv:1811.02054, 2018.
[44] LI P C, YI J F, ZHANG L J. Query-efficient black-box attack by active learning[C]// 2018 IEEE International Conference on Data Mining. 2018: 1200-1205.
[45] ILYAS A, ENGSTROM L, ATHALYE A, et al. Black-box adversarial attacks with limited queries and information[J]. arXiv preprint arXiv:1804.08598, 2018.