[1] 王健宗, 孔令炜, 黄章成, 等. 联邦学习算法综述[J]. 大数据, 2020, 6(6): 0.
WANG J Z, KONG L W, HUANG Z Z, et al. Research review of federated learning algorithms[J]. Big Data Research, 2020, 6(6): 0.
[2] BERTINETTO L, HENRIQUES J F, TORR P H S, et al. Meta-learning with differentiable closed-form solvers[J]. arXiv preprint arXiv:1805.08136, 2018.
[3] MICHALSKI R S, CARBONELL J G, MITCHELL T M. Machine learning: An artificial intelligence approach[M]. Springer Science & Business Media, 2013.
[4] VILALTA R, DRISSI Y. A perspective view and survey of meta-learning[J]. Artificial intelligence review, 2002, 18(2): 77-95.
[5] YANG Q, LIU Y, CHENG Y, et al. Federated learning[J]. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2019, 13(3): 1-207.
[6] HSU K, LEVINE S, FINN C. Unsupervised learning via meta-learning[J]. arXiv preprint arXiv:1810.02334, 2018.
[7] KHODADADEH S, BOLONI L, SHAH M. Unsupervised meta-learning for few-shot image classification[J]. Advances in neural information processing systems, 2019, 32.
[8] RAVI S, LAROCHELLE H. Optimization as a model for few-shot learning[C]//International Conference on Learning Representations. 2017.
[9] DUAN Y, SCHULMAN J, CHEN X, et al. RL$^2$: Fast reinforcement learning via slow reinforcement learning[J]. arXiv preprint arXiv:1611.02779, 2016.
[10] ANDRYCHOWICZ M, DENIL M, GOMEZ S, et al. Learning to learn by gradient descent by gradient descent[J]. Advances in neural information processing systems, 2016, 29.
[11] SANTORO A, BARTUNOV S, BOTVINICK M, et al. Meta-learning with memory-augmented neural networks[C]//International conference on machine learning. PMLR, 2016: 1842-1850.
[12] MISHRA N, ROHANINEJAD M, CHEN X, et al. A simple neural attentive meta-learner[J]. arXiv preprint arXiv:1707.03141, 2017.
[13] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//International conference on machine learning. PMLR, 2017: 1126-1135.
[14] RAJESWARAN A, FINN C, KAKADE S M, et al. Meta-learning with implicit gradients[J]. Advances in neural information processing systems, 2019, 32.
[15] SONG X, GAO W, YANG Y, et al. ES-MAML: Simple Hessian-free meta learning[J]. arXiv preprint arXiv:1910.01215, 2019.
[16] SALIMANS T, HO J, CHEN X, et al. Evolution strategies as a scalable alternative to reinforcement learning[J]. arXiv preprint arXiv:1703.03864, 2017.
[17] NICHOL A, ACHIAM J, SCHULMAN J. On first-order meta-learning algorithms[J]. arXiv preprint arXiv:1803.02999, 2018.
[18] RUSU A A, RAO D, SYGNOWSKI J, et al. Meta-learning with latent embedding optimization[J]. arXiv preprint arXiv:1807.05960, 2018.
[19] SUNG F, ZHANG L, XIANG T, et al. Learning to learn: Meta-critic networks for sample efficient learning[J]. arXiv preprint arXiv:1706.09529, 2017.
[20] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[J]. Advances in neural information processing systems, 2016, 29.
[21] SNELL J, SWERSKY K, ZEMEL R. Prototypical networks for few-shot learning[J]. Advances in neural information processing systems, 2017, 30.
[22] KOCH G, ZEMEL R, SALAKHUTDINOV R. Siamese neural networks for one-shot image recognition[C]//ICML deep learning workshop. 2015, 2: 0.
[23] SANTORO A, RAPOSO D, BARRETT D G, et al. A simple neural network module for relational reasoning[J]. Advances in neural information processing systems, 2017, 30.
[24] CHAWLA N V, BOWYER K W, HALL L O, et al. SMOTE: synthetic minority over-sampling technique[J]. Journal of artificial intelligence research, 2002, 16: 321-357.
[25] INOUE H. Data augmentation by pairing samples for images classification[J]. arXiv preprint arXiv:1801.02929, 2018.
[26] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[J]. Advances in neural information processing systems, 2014, 27.
[27] CUBUK E D, ZOPH B, MANE D, et al. Autoaugment: Learning augmentation policies from data[J]. arXiv preprint arXiv:1805.09501, 2018.
[28] ZHANG H, CAO Z, YAN Z, et al. Sill-net: Feature augmentation with separated illumination representation[J]. arXiv preprint arXiv:2102.03539, 2021.
[29] RAJENDRAN J, IRPAN A, JANG E. Meta-learning requires meta-augmentation[J]. Advances in Neural Information Processing Systems, 2020, 33: 5705-5715.
[30] ZHOU F, LI J, XIE C, et al. MetaAugment: Sample-aware data augmentation policy learning[J]. arXiv preprint arXiv:2012.12076, 2020.
[31] ZHANG R, CHE T, GHAHRAMANI Z, et al. MetaGAN: An adversarial approach to few-shot learning[J]. Advances in neural information processing systems, 2018, 31.
[32] SUN P, OUYANG Y, ZHANG W, et al. MEDA: Meta-learning with data augmentation for few-shot text classification[C]//Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI). 2021.
[33] ALEDHARI M, RAZZAK R, PARIZI R M, et al. Federated learning: A survey on enabling technologies, protocols, and applications[J]. IEEE Access, 2020, 8: 140699-140725.
[34] ORAWIWATTANAKUL T, YAMAJI K, NAKAMURA M, et al. User-controlled privacy protection with attribute-filter mechanism for a federated SSO environment using Shibboleth[C]//2010 International Conference on P2P, Parallel, Grid, Cloud and Internet Computing. IEEE, 2010: 243-249.
WANG N, LE J, LI W, et al. Privacy protection and efficient incumbent detection in spectrum sharing based on federated learning[C]//2020 IEEE Conference on Communications and Network Security (CNS). IEEE, 2020: 1-9.
[35] WEI K, LI J, DING M, et al. Federated learning with differential privacy: Algorithms and performance analysis[J]. IEEE Transactions on Information Forensics and Security, 2020, 15: 3454-3469.
[36] ALI W, KUMAR R, DENG Z, et al. A federated learning approach for privacy protection in context-aware recommender systems[J]. The Computer Journal, 2021, 64(7): 1016-1027.
[37] 王健宗, 孔令炜, 黄章成, 等. 联邦学习隐私保护研究进展[J]. 大数据, 2021, 7(3): 2021030.
WANG J Z, KONG L W, HUANG Z Z, et al. Research advances on privacy protection of federated learning[J]. Big Data Research, 2021, 7(3): 2021030.
[38] YANG Q, LIU Y, CHEN T, et al. Federated machine learning: Concept and applications[J]. ACM Transactions on Intelligent Systems and Technology (TIST), 2019, 10(2): 1-19.
[39] DWORK C. Differential privacy: A survey of results[C]//International conference on theory and applications of models of computation. Springer, Berlin, Heidelberg, 2008: 1-19.
[40] DWORK C, ROTH A. The algorithmic foundations of differential privacy[J]. Found. Trends Theor. Comput. Sci., 2014, 9(3-4): 211-407.
[41] WEI K, LI J, Ding M, et al. Federated learning with differential privacy: Algorithms and performance analysis[J]. IEEE Transactions on Information Forensics and Security, 2020, 15: 3454-3469.
[42] HSU T M H, QI H, BROWN M. Measuring the effects of non-identical data distribution for federated visual classification[J]. arXiv preprint arXiv:1909.06335, 2019.
[43] REDDI S, CHARLES Z, ZAHEER M, et al. Adaptive federated optimization[J]. arXiv preprint arXiv:2003.00295, 2020.
[44] DUCHI J, HAZAN E, SINGER Y. Adaptive subgradient methods for online learning and stochastic optimization[J]. Journal of machine learning research, 2011, 12(7).
[45] KINGMA D P, BA J. Adam: A method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
[46] ZAHEER M, REDDI S, SACHAN D, et al. Adaptive methods for nonconvex optimization[J]. Advances in neural information processing systems, 2018, 31.
[47] WANG H, YUROCHKIN M, SUN Y, et al. Federated learning with matched averaging[J]. arXiv preprint arXiv:2002.06440, 2020.
[48] ARIVAZHAGAN M G, AGGARWAL V, SINGH A K, et al. Federated learning with personalization layers[J]. arXiv preprint arXiv:1912.00818, 2019.
[49] ZHAO Y, LI M, LAI L, et al. Federated learning with non-iid data[J]. arXiv preprint arXiv:1806.00582, 2018.
[50] MCMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]//Artificial intelligence and statistics. PMLR, 2017: 1273-1282.
[51] BRIGGS C, FAN Z, ANDRAS P. Federated learning with hierarchical clustering of local updates to improve training on non-IID data[C]//2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020: 1-9.
[52] SATTLER F, WIEDEMANN S, MÜLLER K R, et al. Robust and communication-efficient federated learning from non-iid data[J]. IEEE transactions on neural networks and learning systems, 2019, 31(9): 3400-3413.
[53] KAIROUZ P, MCMAHAN H B, AVENT B, et al. Advances and open problems in federated learning[J]. Foundations and Trends® in Machine Learning, 2021, 14(1–2): 1-210.
[54] KONEČNÝ J, MCMAHAN H B, YU F X, et al. Federated learning: Strategies for improving communication efficiency[J]. arXiv preprint arXiv:1610.05492, 2016.
[55] BONAWITZ K, IVANOV V, KREUTER B, et al. Practical secure aggregation for privacy-preserving machine learning[C]//Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017: 1175-1191.
[56] ALEDHARI M, RAZZAK R, PARIZI R M, et al. Federated learning: A survey on enabling technologies, protocols, and applications[J]. IEEE Access, 2020, 8: 140699-140725.
[57] LI T, SAHU A K, ZAHEER M, et al. Federated optimization in heterogeneous networks[J]. Proceedings of Machine Learning and Systems, 2020, 2: 429-450.
[58] KONEČNÝ J, MCMAHAN B, RAMAGE D. Federated optimization: Distributed optimization beyond the datacenter[J]. arXiv preprint arXiv:1511.03575, 2015.
[59] DENG Y, KAMANI M M, MAHDAVI M. Adaptive personalized federated learning[J]. arXiv preprint arXiv:2003.13461, 2020.
[60] KARIMIREDDY S P, KALE S, MOHRI M, et al. SCAFFOLD: Stochastic controlled averaging for federated learning[C]//International Conference on Machine Learning. PMLR, 2020: 5132-5143.
[61] LI T, HU S, BEIRAMI A, et al. Ditto: Fair and robust federated learning through personalization[C]//International Conference on Machine Learning. PMLR, 2021: 6357-6368.
[62] LI Q, HE B, SONG D. Model-contrastive federated learning[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 10713-10722.
[63] VANSCHOREN J. Meta-learning: A survey[J]. arXiv preprint arXiv:1810.03548, 2018.
[64] VILALTA R, DRISSI Y. A perspective view and survey of meta-learning[J]. Artificial intelligence review, 2002, 18(2): 77-95.
[65] 李凡长, 刘洋, 吴鹏翔, 等. 元学习研究综述[J]. 计算机学报, 2021, 44(2): 422-446.
LI F Z, LIU Y, WU P X, et al. A survey on recent advances in meta-learning[J]. Chinese Journal of Computers, 2021, 44(2): 422-446.
[66] CHEN F, LUO M, DONG Z, et al. Federated meta-learning with fast convergence and efficient communication[J]. arXiv preprint arXiv:1802.07876, 2018.
[67] CORINZIA L, BEURET A, BUHMANN J M. Variational federated multi-task learning[J]. arXiv preprint arXiv:1906.06268, 2019.
[68] FALLAH A, MOKHTARI A, OZDAGLAR A. Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach[J]. Advances in Neural Information Processing Systems, 2020, 33: 3557-3568.
[69] KHODAK M, BALCAN M F F, TALWALKAR A S. Adaptive gradient-based meta-learning methods[J]. Advances in Neural Information Processing Systems, 2019, 32.
[70] ZHANG M, SAPRA K, FIDLER S, et al. Personalized federated learning with first order model optimization[J]. arXiv preprint arXiv:2012.08565, 2020.
[71] ACAR D A E, ZHAO Y, ZHU R, et al. Debiasing Model Updates for Improving Personalized Federated Training[C]//International Conference on Machine Learning. PMLR, 2021: 21-31.
[72] SINGHAL K, SIDAHMED H, GARRETT Z, et al. Federated reconstruction: Partially local federated learning[J]. Advances in Neural Information Processing Systems, 2021, 34.
[73] ZAFAR M B, VALERA I, GOMEZ RODRIGUEZ M, et al. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment[C]//Proceedings of the 26th international conference on world wide web. 2017: 1171-1180.
[74] LI T, SANJABI M, BEIRAMI A, et al. Fair resource allocation in federated learning[J]. arXiv preprint arXiv:1905.10497, 2019.
[75] JIANG Y, KONEČNÝ J, RUSH K, et al. Improving federated learning personalization via model agnostic meta learning[J]. arXiv preprint arXiv:1909.12488, 2019.
[76] LIN S, YANG G, ZHANG J. A collaborative learning framework via federated meta-learning[C]//2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2020: 289-299.
[77] EDMUNDS R, GOLMANT N, RAMASESH V, et al. Transferability of adversarial attacks in model-agnostic meta-learning[C]//Deep Learning and Security Workshop (DLSW). 2017.
[78] YIN C, TANG J, XU Z, et al. Adversarial meta-learning[J]. arXiv preprint arXiv:1806.03316, 2018.
[79] YUE S, REN J, XIN J, et al. Efficient Federated Meta-Learning over Multi-Access Wireless Networks[J]. arXiv preprint arXiv:2108.06453, 2021.
[80] YUE S, REN J, XIN J, et al. Efficient Federated Meta-Learning over Multi-Access Wireless Networks[J]. IEEE Journal on Selected Areas in Communications, 2022.
[81] ELGABLI A, ISSAID C B, BEDI A S, et al. Energy-Efficient and Federated Meta-Learning via Projected Stochastic Gradient Ascent[J]. arXiv preprint arXiv:2105.14772, 2021.
[82] YAO X, HUANG T, ZHANG R X, et al. Federated learning with unbiased gradient aggregation and controllable meta updating[J]. arXiv preprint arXiv:1910.08234, 2019.
[83] CHEN C L, GOLUBCHIK L, PAOLIERI M. Backdoor attacks on federated meta-learning[J]. arXiv preprint arXiv:2006.07026, 2020.
[84] 吴建汉, 司世景, 王健宗, 等. 联邦学习攻击与防御综述[J/OL]. 大数据: 1-27 [2022-03-08]. http://kns.cnki.net/kcms/detail/10.1321.G2.20220216.1702.010.html.
WU J H, SI S J, WANG J Z, et al. An overview of federated learning attack and defense[J/OL]. Big Data Research: 1-27 [2022-03-08]. http://kns.cnki.net/kcms/detail/10.1321.G2.20220216.1702.010.html.
[85] BLANCHARD P, EL MHAMDI E M, GUERRAOUI R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent[J]. Advances in Neural Information Processing Systems, 2017, 30.
[86] YIN D, CHEN Y, KANNAN R, et al. Byzantine-robust distributed learning: Towards optimal statistical rates[C]//International Conference on Machine Learning. PMLR, 2018: 5650-5659.
[87] ARAMOON O, CHEN P Y, QU G, et al. Meta Federated Learning[J]. arXiv preprint arXiv:2102.05561, 2021.
[88] BONAWITZ K, IVANOV V, KREUTER B, et al. Practical secure aggregation for privacy-preserving machine learning[C]//Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. 2017: 1175-1191.
[89] ZHENG W, YAN L, GOU C, et al. Federated meta-learning for fraudulent credit card detection[C]//Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence. 2021: 4654-4660.
[90] LI X, LIU S, LI Z, et al. FlowScope: Spotting money laundering based on graphs[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(4): 4731-4738.
[91] DAL POZZOLO A, BORACCHI G, CAELEN O, et al. Credit card fraud detection: a realistic modeling and a novel learning strategy[J]. IEEE transactions on neural networks and learning systems, 2017, 29(8): 3784-3797.
[92] ZHAO H, JI F, GUAN Q, et al. Federated Meta Learning Enhanced Acoustic Radio Cooperative Framework for Ocean of Things Underwater Acoustic Communications[J]. arXiv preprint arXiv:2105.13296, 2021.
[93] LIN Y, REN P, CHEN Z, et al. Meta matrix factorization for federated rating predictions[C]//Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. 2020: 981-990.
[94] JALALIRAD A, SCAVUZZO M, CAPOTA C, et al. A simple and efficient federated recommender system[C]//Proceedings of the 6th IEEE/ACM International Conference on Big Data Computing, Applications and Technologies. 2019: 53-58.
[95] BEEL J. Federated meta-learning: Democratizing algorithm selection across disciplines and software libraries[C]//Irish Conference on Artificial Intelligence and Cognitive Science (AICS). 2018: 210-219.