Journal on Communications ›› 2022, Vol. 43 ›› Issue (8): 1-16. DOI: 10.11959/j.issn.1000-436x.2022131

• Papers •

Large-scale post-disaster user distributed coverage optimization based on multi-agent reinforcement learning

Wenjun XU1, Silei WU1, Fengyu WANG1, Lan LIN1, Guojun LI2, Zhi ZHANG3   

  1. School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China
  2. Lab of BLOS Trusted Information Transmission, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
  3. School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Revised: 2022-05-23  Online: 2022-08-25  Published: 2022-08-01
  • Supported by:
    The National Key Research and Development Program of China (2019YFC1511302); The National Natural Science Foundation of China (61871057); The National Natural Science Foundation of China (61790553); The Fundamental Research Funds for the Central Universities (2019XD-A13)

Abstract:

To quickly restore emergency communication services for large-scale post-disaster users, a distributed intellicise coverage optimization architecture based on multi-agent reinforcement learning (RL) was proposed, which addresses the significant differences and dynamics in communication services caused by the large number of access users, as well as the poor scalability of centralized algorithms. Specifically, in the network characterization layer, a distributed k-sums clustering algorithm that accounts for differences in user services was designed, enabling each unmanned aerial vehicle base station (UAV-BS) to adjust its local networking autonomously with low complexity and to obtain cluster-center states for multi-agent RL. In the trajectory control layer, a multi-agent soft actor-critic (MASAC) algorithm with a distributed-training-distributed-execution structure was designed so that each UAV-BS controls its trajectory as an intelligent node. Furthermore, ensemble learning and curriculum learning were integrated to improve the stability and convergence speed of the training process. Simulation results show that the proposed distributed k-sums algorithm outperforms k-means in terms of average load efficiency and clustering balance, and that the MASAC-based trajectory control algorithm effectively reduces communication interruptions and improves spectrum efficiency, outperforming existing RL algorithms.
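The distributed k-sums clustering algorithm itself is defined in the full paper, not on this abstract page. As a rough illustration only, the Python sketch below shows one way a demand-weighted, load-aware clustering with purely local center updates could be organized for UAV-BS agents; all identifiers (assign_users, update_centers, demands, alpha) and the load-penalty cost are assumptions made for illustration, not the authors' method.

    import numpy as np

    def assign_users(users, demands, centers, alpha=5.0):
        """Assign each user to a UAV-BS cluster, penalizing heavily loaded clusters.

        users:   (N, 2) user positions
        demands: (N,)   per-user traffic demand (models service differences)
        centers: (K, 2) current UAV-BS cluster centers
        alpha:   illustrative weight of the load-balancing penalty
        """
        load = np.zeros(len(centers))
        labels = np.empty(len(users), dtype=int)
        for i, (u, d) in enumerate(zip(users, demands)):
            dist = np.linalg.norm(centers - u, axis=1)   # geometric cost to each UAV-BS
            cost = dist + alpha * load                   # penalize already-busy clusters
            k = int(np.argmin(cost))
            labels[i] = k
            load[k] += d                                 # accumulate demand load
        return labels, load

    def update_centers(users, demands, labels, centers):
        """Each UAV-BS recomputes only its own center as a demand-weighted mean
        of its local users, i.e., a purely local (distributed-style) update."""
        new_centers = centers.copy()
        for k in range(len(centers)):
            mask = labels == k
            if mask.any():
                w = demands[mask] / demands[mask].sum()
                new_centers[k] = (w[:, None] * users[mask]).sum(axis=0)
        return new_centers

    # Toy run: 200 post-disaster users served by 4 UAV-BS clusters (all values illustrative).
    rng = np.random.default_rng(0)
    users = rng.uniform(0, 1000, size=(200, 2))      # user positions in a 1 km x 1 km area
    demands = rng.uniform(0.5, 2.0, size=200)        # heterogeneous service demands
    centers = rng.uniform(0, 1000, size=(4, 2))
    for _ in range(20):                              # alternate assignment and local updates
        labels, load = assign_users(users, demands, centers)
        centers = update_centers(users, demands, labels, centers)

In this sketch, the load penalty is what pushes assignments toward balanced clusters, which is the property the abstract attributes to k-sums relative to k-means; the resulting cluster centers would then serve as part of the state input to the multi-agent RL trajectory controller.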

Key words: emergency communication, coverage optimization, multi-agent reinforcement learning, distributed training

