[1] RODRIGUES T K, SUTO K, KATO N. Edge cloud server deployment with transmission power control through machine learning for 6G Internet of Things[J]. IEEE Transactions on Emerging Topics in Computing, 2021, 9(4): 2099-2108.
[2] MCMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]// Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS). Piscataway: IEEE Press, 2017: 1273-1282.
[3] LI K J, XIAO C H. CBFL: a communication-efficient federated learning framework from data redundancy perspective[J]. IEEE Systems Journal, 2022, 16(4): 5572-5583.
[4] SATTLER F, WIEDEMANN S, MÜLLER K R, et al. Robust and communication-efficient federated learning from non-i.i.d. data[J]. IEEE Transactions on Neural Networks and Learning Systems, 2019, 31(9): 3400-3413.
[5] LI K J, XIAO C H. PBFL: communication-efficient federated learning via parameter predicting[J]. The Computer Journal, 2023, 66(3): 626-642.
[6] KONECNY J, MCMAHAN H B, YU F X, et al. Federated learning: strategies for improving communication efficiency[J]. arXiv preprint, arXiv:1610.05492, 2016.
[7] LI T, SAHU A K, TALWALKAR A, et al. Federated learning: challenges, methods, and future directions[J]. IEEE Signal Processing Magazine, 2020, 37(3): 50-60.
[8] LI K J, XIAO C H. Federated learning communication-efficiency framework via coreset construction[J]. The Computer Journal, 2022, doi: 10.1093/comjnl/bxac062.
[9] TELLEZ D, LITJENS G, LAAK J, et al. Neural image compression for gigapixel histopathology image analysis[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(2): 567-578.
[10] LI X, HUANG K X, YANG W H, et al. On the convergence of FedAvg on non-IID data[C]// Proceedings of the International Conference on Learning Representations (ICLR). Piscataway: IEEE Press, 2020: 1-26.
[11] TAO Z, LI Q. eSGD: communication efficient distributed deep learning on the edge[C]// Proceedings of the 2018 Workshop on Hot Topics in Edge Computing (HotEdge 18). Piscataway: IEEE Press, 2018: 1-6.
[12] YU H, YANG S, ZHU S H. Parallel restarted SGD with faster convergence and less communication: demystifying why model averaging works for deep learning[C]// Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2019: 5693-5700.
[13] XU J J, DU W L, JIN Y C, et al. Ternary compression for communication-efficient federated learning[J]. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(3): 1162-1176.
[14] LI S Q, QI Q, WANG J Y, et al. GGS: general gradient sparsification for federated learning in edge computing[C]// Proceedings of 2020 IEEE International Conference on Communications (ICC). Piscataway: IEEE Press, 2020: 1-7.
[15] HAN P C, WANG S Q, LEUNG K K. Adaptive gradient sparsification for efficient federated learning: an online learning approach[C]// Proceedings of 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS). Piscataway: IEEE Press, 2021: 300-310.
[16] OZFATURA E, OZFATURA K, GÜNDÜZ D. Time-correlated sparsification for communication-efficient federated learning[C]// Proceedings of 2021 IEEE International Symposium on Information Theory (ISIT). Piscataway: IEEE Press, 2021: 461-466.
[17] ASAD M, MOUSTAFA A, ITO T. FedOpt: towards communication efficiency and privacy preservation in federated learning[J]. Applied Sciences, 2020, 10(8): 2864.
[18] BERNSTEIN J, WANG Y, AZIZZADENESHELI K, et al. signSGD: compressed optimisation for non-convex problems[C]// Proceedings of the 35th International Conference on Machine Learning. Piscataway: IEEE Press, 2018: 560-569.
[19] REISIZADEH A, MOKHTARI A, HASSANI H, et al. FedPAQ: a communication-efficient federated learning method with periodic averaging and quantization[C]// Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS). Piscataway: IEEE Press, 2020: 2021-2031.
[20] AMIRI M M, GUNDUZ D, KULKARNI S R, et al. Federated learning with quantized global model updates[J]. arXiv preprint, arXiv:2006.10672, 2020.
[21] NORI M K, YUN S, KIM I M. Fast federated learning by balancing communication trade-offs[J]. IEEE Transactions on Communications, 2021, 69(8): 5168-5182.
[22] JHUNJHUNWALA D, GADHIKAR A, JOSHI G, et al. Adaptive quantization of model updates for communication-efficient federated learning[C]// Proceedings of 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Piscataway: IEEE Press, 2021: 3110-3114.
[23] DU Y, YANG S, HUANG H. High-dimensional stochastic gradient quantization for communication-efficient edge learning[C]// Proceedings of 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP). Piscataway: IEEE Press, 2019: 1-5.
[24] LIAN Z, CAO Z, ZUO Y, et al. AGQFL: communication-efficient federated learning via automatic gradient quantization in edge heterogeneous systems[C]// Proceedings of 2021 IEEE 39th International Conference on Computer Design (ICCD). Piscataway: IEEE Press, 2021: 551-558.
[25] BĂDOIU M, HAR-PELED S, INDYK P. Approximate clustering via core-sets[C]// Proceedings of the 34th Annual ACM Symposium on Theory of Computing. New York: ACM Press, 2002: 250-257.
[26] BRAVERMAN V, FELDMAN D, LANG H, et al. Efficient coreset constructions via sensitivity sampling[C]// Proceedings of the 13th Asian Conference on Machine Learning. Piscataway: IEEE Press, 2021: 948-963.
[27] LU H L, LI M J, HE T, et al. Robust coreset construction for distributed machine learning[J]. IEEE Journal on Selected Areas in Communications, 2020, 38(10): 2400-2417.
[28] CAMPBELL T, BRODERICK T. Bayesian coreset construction via greedy iterative geodesic ascent[C]// Proceedings of the 35th International Conference on Machine Learning. Piscataway: IEEE Press, 2018: 698-706.
[29] FAN Y W, LI H S. Communication efficient coreset sampling for distributed learning[C]// Proceedings of 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). Piscataway: IEEE Press, 2018: 1-5.
[30] MOCANU D C, MOCANU E, STONE P, et al. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science[J]. Nature Communications, 2018, 9: 2383.
[31] LI A, SUN J, WANG B, et al. LotteryFL: personalized and communication-efficient federated learning with lottery ticket hypothesis on non-IID datasets[J]. arXiv preprint, arXiv:2008.03371, 2020.