Journal on Communications ›› 2023, Vol. 44 ›› Issue (11): 79-93. doi: 10.11959/j.issn.1000-436x.2023196

• Topic: Distributed Edge Intelligence in Complex Environments •

  • About the authors: Qianpiao MA (1993− ), born in Zunyi, Guizhou, Ph.D., is a researcher at Purple Mountain Laboratories. His research interests include edge computing, federated learning, and distributed machine learning.
    Qingmin JIA (1990− ), born in Tai'an, Shandong, Ph.D., is a researcher at Purple Mountain Laboratories. His research interests include computing power networks, deterministic networking, edge intelligence, and the industrial Internet.
    Jianchun LIU (1996− ), born in Yancheng, Jiangsu, is an associate researcher and master's supervisor at the University of Science and Technology of China. His research interests include the Internet of things, edge computing, and distributed machine learning.
    Hongli XU (1980− ), born in Ningbo, Zhejiang, is a professor and Ph.D. supervisor at the University of Science and Technology of China. His research interests include the Internet of things, edge computing, and software-defined networking.
    Renchao XIE (1984− ), born in Nanping, Fujian, Ph.D., is an associate professor and master's supervisor at Beijing University of Posts and Telecommunications. His research interests include information-centric networking, mobile content delivery, and mobile edge computing.
    Tao HUANG (1980− ), born in Chongqing, Ph.D., is a professor and Ph.D. supervisor at Beijing University of Posts and Telecommunications. His research interests include novel network architectures, content delivery networks, and software-defined networking.

Client grouping and time-sharing scheduling for asynchronous federated learning in heterogeneous edge computing environment

Qianpiao MA1, Qingmin JIA1, Jianchun LIU2,3, Hongli XU2,3, Renchao XIE1,4, Tao HUANG1,4   

  1. 1 Future Network Research Center, Purple Mountain Laboratories, Nanjing 211111, China
    2 School of Computer Science and Technology, University of Science and Technology of China, Hefei 230026, China
    3 Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou 215123, China
    4 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Revised: 2023-10-08 • Online: 2023-11-01 • Published: 2023-11-01
  • Supported by:
    The National Natural Science Foundation of China (U1709217, 61936015, 92267301)


Abstract:

To overcome the three key challenges of federated learning in heterogeneous edge computing, i.e., edge heterogeneity, non-IID data, and communication resource constraints, a grouping asynchronous federated learning (FedGA) mechanism was proposed. Edge nodes were divided into multiple groups; each group performed global updates asynchronously with the global model, while edge nodes within a group communicated with the parameter server through time-sharing communication. Theoretical analysis established a quantitative relationship between the convergence bound of FedGA and the data distribution among the groups. A time-sharing scheduling strategy, the magic mirror method (MMM), was proposed to optimize the completion time of a single round of model updating within a group. Based on both the theoretical analysis of FedGA and MMM, an effective grouping algorithm was designed to minimize the overall training completion time. Experimental results demonstrate that the proposed FedGA and MMM reduce model training time by 30.1% to 87.4% compared with existing state-of-the-art methods.
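The grouped, asynchronous structure described in the abstract can be illustrated with a small toy sketch. This is not the paper's algorithm: the least-squares local objective, the staleness-discounted mixing rule, and all function names below are assumptions chosen only to make the grouping/time-sharing/asynchronous-merge pattern concrete.

```python
import numpy as np

# Toy sketch of the grouped asynchronous scheme: nodes are partitioned into
# groups, nodes inside a group are served one at a time (time-sharing), and
# each group merges into the global model asynchronously, with staler
# results receiving a smaller mixing weight.

def local_update(w, data, lr=0.1):
    """One gradient step on a least-squares loss (stand-in for local training)."""
    X, y = data
    return w - lr * X.T @ (X @ w - y) / len(y)

def group_round(global_w, group_data):
    """Time-sharing inside a group: members update sequentially, one channel."""
    w = global_w.copy()
    for data in group_data:        # only one node communicates at a time
        w = local_update(w, data)
    return w

def async_aggregate(global_w, group_w, staleness, base_mix=0.5):
    """Merge a (possibly stale) group model; older results get less weight."""
    alpha = base_mix / (1.0 + staleness)
    return (1.0 - alpha) * global_w + alpha * group_w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])     # ground truth the nodes jointly estimate

def make_node(n=32):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w + 0.01 * rng.normal(size=n)

groups = [[make_node() for _ in range(3)] for _ in range(2)]
global_w = np.zeros(2)
version, last_sync = 0, [0] * len(groups)

for t in range(150):
    g = t % len(groups)            # groups report back asynchronously
    w_g = group_round(global_w, groups[g])
    global_w = async_aggregate(global_w, w_g, version - last_sync[g])
    version += 1
    last_sync[g] = version

print(global_w)                    # approaches true_w after enough rounds
```

The staleness discount (a common device in asynchronous federated learning) keeps an out-of-date group from dragging the global model backward, which is what allows the groups to skip synchronization barriers entirely.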

Key words: edge computing, federated learning, Non-IID, heterogeneity, convergence analysis

