Journal on Communications ›› 2023, Vol. 44 ›› Issue (8): 37-48. DOI: 10.11959/j.issn.1000-436x.2023147


Communication-efficient distributed precoding design for Massive MIMO

Mian LI1,2,3, Yang LI2,3,4, Zonghui ZHANG1,2, Qingjiang SHI2,5   

  1. School of Science and Engineering, The Chinese University of Hong Kong (Shenzhen), Shenzhen 518172, China
  2. Shenzhen Research Institute of Big Data, Shenzhen 518172, China
  3. Pengcheng Laboratory, Shenzhen 518055, China
  4. Pazhou Laboratory (Huangpu), Guangzhou 510555, China
  5. School of Software Engineering, Tongji University, Shanghai 200092, China
  • Revised: 2023-04-21  Online: 2023-08-01  Published: 2023-08-01
  • Supported by:
    The National Key Research and Development Program of China (2022YFA1003900); The National Natural Science Foundation of China (62071409); The National Natural Science Foundation of China (62231019); The National Natural Science Foundation of China (62101349); Shenzhen Science and Technology Program (RCJC20210609104448114); Major Key Project of Pengcheng Laboratory (PCL2023AS1-2)

Abstract:

A communication-efficient distributed precoding scheme was proposed for the multi-baseband-processing-unit (BBU) baseband processing architecture, aiming to reduce both the fronthaul data exchange between BBUs and the computational complexity. Firstly, a distributed framework based on the R-WMMSE algorithm was proposed, which exploited the subspace property of the optimal solution to compress the exchanged data losslessly, thereby reducing the amount of data exchanged. Furthermore, two learnable compression modules based on matrix multiplication were designed, whose optimized computing structures and matrix parameters reduced the number of parameters and the computational cost while maintaining expressive power. Finally, the learnable modules and the distributed precoding framework were jointly optimized with the achievable rate as the objective to obtain the final model. The proposed scheme achieves guaranteed precoding performance with lower requirements on data exchange and computational complexity.
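Two of the abstract's core ideas, a learnable compression module built purely from matrix multiplications and joint training with the achievable rate as the objective, can be illustrated with a minimal sketch. The sketch below assumes PyTorch; the module name MatMulCompressor, all dimensions, the matched-filter stand-in precoder, and the simplified single-stream sum-rate expression are illustrative assumptions and do not reproduce the paper's distributed R-WMMSE design.

    import torch
    import torch.nn as nn

    class MatMulCompressor(nn.Module):
        """Compress the last dimension of a message from d to r (r < d) and expand
        it back, using only learned matrix multiplications (no nonlinearities)."""
        def __init__(self, d: int, r: int):
            super().__init__()
            self.encode = nn.Parameter(torch.randn(d, r) / d ** 0.5)  # compression matrix
            self.decode = nn.Parameter(torch.randn(r, d) / r ** 0.5)  # reconstruction matrix

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x @ self.encode @ self.decode  # lossy low-rank round trip

    def sum_rate(H: torch.Tensor, V: torch.Tensor, noise: float = 1.0) -> torch.Tensor:
        """Toy single-stream sum rate for real-valued channels H (batch, K, n_tx)
        and precoders V (batch, n_tx, K); other users' streams act as interference."""
        G = (H @ V) ** 2                            # (batch, K, K) link gains
        sig = torch.diagonal(G, dim1=-2, dim2=-1)   # desired-signal power per user
        interf = G.sum(dim=-1) - sig                # inter-user interference
        return torch.log2(1 + sig / (interf + noise)).sum(dim=-1).mean()

    if __name__ == "__main__":
        batch, K, n_tx, r = 32, 4, 16, 4            # illustrative sizes only
        comp = MatMulCompressor(d=n_tx, r=r)
        opt = torch.optim.Adam(comp.parameters(), lr=1e-2)
        for step in range(200):
            H = torch.randn(batch, K, n_tx)         # true channels
            H_hat = comp(H)                         # channel info after compressed exchange
            V = H_hat.transpose(-2, -1)             # matched-filter precoder from exchanged data
            loss = -sum_rate(H, V)                  # maximize the achievable rate
            opt.zero_grad()
            loss.backward()
            opt.step()

In this toy setup the compression and reconstruction matrices are trained end to end against the rate objective, mirroring the joint optimization described in the abstract, while the precoder itself is a fixed matched filter rather than the paper's R-WMMSE-based distributed solution.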

Key words: distributed precoding, data compression, deep learning, joint optimization
