25 June 2022, Volume 7 Issue 2
    

  • Special Focus: Edge Artificial Intelligence for 6G
  • Luyi Chang, Zhe Zhang, Pei Li, Shan Xi, Wei Guo, Yukang Shen, Zehui Xiong, Jiawen Kang, Dusit Niyato, Xiuquan Qiao, Yi Wu
    Journal of Communications and Information Networks. 2022, 7(2): 107-121. https://doi.org/10.23919/JCIN.2022.9815195

    Sixth generation (6G) enabled edge intelligence opens up a new era of the Internet of everything, making it possible to interconnect people, devices, and the cloud anytime, anywhere. More and more next-generation wireless network smart service applications are changing our way of life and improving our quality of life. As the hottest new form of next-generation Internet application, the Metaverse is striving to connect billions of users and create a shared world where the virtual and the real merge. However, limited by resources, computing power, and sensory devices, the Metaverse is still far from realizing its full vision of immersion, materialization, and interoperability. To this end, this survey aims to realize this vision through the organic integration of 6G-enabled edge artificial intelligence (AI) and the Metaverse. Specifically, we first introduce three new types of edge-Metaverse architectures that use 6G-enabled edge AI to overcome the resource and computing constraints of the Metaverse. We then summarize the technical challenges that these architectures face in the Metaverse and the existing solutions. Furthermore, we explore how edge-Metaverse architectures help the Metaverse interact with and share digital data. Finally, we discuss future research directions for realizing the true vision of the Metaverse with 6G-enabled edge AI.

  • Rongkang Dong, Yuyi Mao, Jun Zhang
    Journal of Communications and Information Networks. 2022, 7(2): 122-134. https://doi.org/10.23919/JCIN.2022.9815196

    By leveraging data sample diversity, the early-exit network has recently emerged as a prominent neural network architecture for accelerating deep learning inference. However, the intermediate classifiers of the early exits introduce additional computation overhead, which is unfavorable for resource-constrained edge artificial intelligence (AI). In this paper, we propose an early-exit prediction mechanism to reduce the on-device computation overhead in a device-edge co-inference system supported by early-exit networks. Specifically, we design a low-complexity module, namely the exit predictor, to guide distinctly “hard” samples to bypass the computation of the early exits. Moreover, considering the varying communication bandwidth, we extend the early-exit prediction mechanism to latency-aware edge inference, which adapts the prediction thresholds of the exit predictor and the confidence thresholds of the early-exit network via a few simple regression models. Extensive experimental results demonstrate the effectiveness of the exit predictor in achieving a better tradeoff between accuracy and on-device computation overhead for early-exit networks. Furthermore, compared with the baseline methods, the proposed latency-aware edge inference method attains higher inference accuracy under different bandwidth conditions.
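
    The gating idea lends itself to a compact implementation. Below is a minimal PyTorch sketch, not the authors' code: the ExitPredictor module, the co_inference routine, and both threshold values are illustrative assumptions showing how a cheap gate can skip the early-exit classifier for "hard" samples and offload them to the edge server.

```python
import torch
import torch.nn as nn

class ExitPredictor(nn.Module):
    """Low-complexity gate that flags samples likely to succeed at the
    early exit; distinctly 'hard' samples bypass its classifier entirely."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(channels, 1)

    def forward(self, feat):                    # feat: (1, C, H, W)
        return torch.sigmoid(self.fc(self.pool(feat).flatten(1)))

def co_inference(x, backbone1, early_exit, predictor, backbone2, final_exit,
                 pred_thresh=0.5, conf_thresh=0.8):
    # Sketch assumes a single sample (batch size 1) per decision.
    feat = backbone1(x)                         # on-device feature extraction
    if predictor(feat).item() >= pred_thresh:   # gate says "worth trying"
        logits = early_exit(feat)               # early-exit classifier
        if logits.softmax(-1).max().item() >= conf_thresh:
            return logits                       # confident: stop on device
    # hard sample: skip the early exit and offload the feature to the edge
    return final_exit(backbone2(feat))
```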

  • Tong Yin, Lixin Li, Donghui Ma, Wensheng Lin, Junli Liang, Zhu Han
    Journal of Communications and Information Networks. 2022, 7(2): 135-144. https://doi.org/10.23919/JCIN.2022.9815197

    In recent years, federated learning (FL) has played an important role in privacy-sensitive scenarios by performing learning tasks collectively without exchanging data. However, because the centralized model aggregation must wait for heterogeneous devices, the last model updated after local training delays convergence, which increases the economic cost and dampens clients' motivation to participate in FL. In addition, with the rapid development and application of the intelligent reflecting surface (IRS) in next-generation wireless communication, the IRS has proven to be an effective way to enhance communication quality. In this paper, we propose a framework of federated learning with IRS for grouped heterogeneous training (FLIGHT) to reduce the latency caused by the heterogeneous communication and computation capabilities of the clients. Specifically, we formulate a cost function and a greedy-based grouping strategy, which divides the clients into several groups to accelerate the convergence of the FL model. The simulation results verify the effectiveness of FLIGHT in accelerating the convergence of FL with heterogeneous clients. Besides the exemplified linear regression (LR) model and convolutional neural network (CNN), FLIGHT is also applicable to other learning models.
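
    To make the grouping step concrete, here is a hedged Python sketch of a greedy grouping strategy in the spirit of FLIGHT. The per-client cost model (local training latency plus upload latency) and the longest-processing-time balancing rule are illustrative assumptions, not the paper's exact cost function.

```python
def group_clients(clients, num_groups):
    """clients: list of (client_id, comp_latency_s, comm_latency_s) tuples."""
    cost = lambda c: c[1] + c[2]          # per-round cost: training + upload
    groups = [[] for _ in range(num_groups)]
    load = [0.0] * num_groups
    # Longest-processing-time greedy: assign the most expensive client to
    # the currently lightest group, so stragglers are spread evenly and no
    # single group is dominated by slow clients.
    for c in sorted(clients, key=cost, reverse=True):
        g = min(range(num_groups), key=load.__getitem__)
        groups[g].append(c[0])
        load[g] += cost(c)
    return groups

# Example: six heterogeneous clients split into two balanced groups.
clients = [("A", 2.0, 0.5), ("B", 0.4, 0.1), ("C", 1.5, 0.6),
           ("D", 0.3, 0.2), ("E", 0.9, 0.4), ("F", 0.2, 0.1)]
print(group_clients(clients, 2))
```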

  • Lin Hu, Zhibin Wang, Hongbin Zhu, Yong Zhou
    Journal of Communications and Information Networks. 2022, 7(2): 145-156. https://doi.org/10.23919/JCIN.2022.9815198

    In this paper, we propose reconfigurable intelligent surface (RIS) assisted over-the-air federated learning (FL), where multiple antennas are deployed at each edge device to enable simultaneous multidimensional model transmission over a millimeter-wave (mmWave) network. We conduct a rigorous convergence analysis of the proposed FL system, taking into account dynamic channel fading and analog transmissions. Inspired by the convergence analysis, we propose to jointly optimize the receive digital and analog beamforming matrices at the access point, the RIS phase-shift matrix, and the transmit beamforming matrices at the transmitting devices to minimize the transmission distortion. The coupling of the optimization variables and the non-convex constraints make the formulated problem challenging to solve. To this end, we develop a low-complexity Riemannian conjugate gradient (RCG) based algorithm to handle the unit-modulus constraints and decouple the optimization variables. Simulations show that the proposed RCG algorithm outperforms the successive convex approximation algorithm in terms of learning performance.
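
    As an illustration of the RCG machinery, the following self-contained NumPy sketch runs a Riemannian conjugate gradient on the complex-circle manifold {v : |v_n| = 1}, the constraint set of RIS phase shifts. The quadratic objective and the fixed step size are stand-in assumptions for the paper's transmission-distortion objective and step-size rule.

```python
import numpy as np

def proj(v, x):
    """Project ambient vector x onto the tangent space of the circle at v."""
    return x - np.real(np.conj(v) * x) * v

def retract(v):
    """Map back to the manifold by elementwise normalization."""
    return v / np.abs(v)

def rcg(Q, q, n_iter=200, step=0.01):
    """Minimize f(v) = v^H Q v - 2 Re(q^H v) subject to |v_n| = 1."""
    n = Q.shape[0]
    v = retract(np.exp(1j * 2 * np.pi * np.random.rand(n)))
    g = proj(v, 2 * (Q @ v - q))          # Riemannian gradient
    d = -g                                # initial search direction
    for _ in range(n_iter):
        v_new = retract(v + step * d)
        g_new = proj(v_new, 2 * (Q @ v_new - q))
        # Polak-Ribiere+ coefficient, with projection as vector transport
        beta = max(0.0, np.real(np.vdot(g_new, g_new - proj(v_new, g)))
                        / np.real(np.vdot(g, g)))
        d = -g_new + beta * proj(v_new, d)
        v, g = v_new, g_new
    return v

# Toy problem: random positive semidefinite Q and linear term q.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
Q = A.conj().T @ A
q = rng.standard_normal(8) + 1j * rng.standard_normal(8)
v = rcg(Q, q)
print(np.real(v.conj() @ Q @ v) - 2 * np.real(q.conj() @ v))
```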

  • Bo Li, Ting Wang, Peng Yang, Mingsong Chen, Mounir Hamdi
    Journal of Communications and Information Networks. 2022, 7(2): 157-169. https://doi.org/10.23919/JCIN.2022.9815199

    To support the needs of ever-growing cloud-based services, the number of servers and network devices in data centers is increasing exponentially, which in turn results in high complexity and difficulty in network optimization. Machine learning (ML) provides an effective way to deal with these challenges by enabling network intelligence. To this end, numerous creative ML-based approaches have been put forward in recent years. Nevertheless, the intelligent optimization of data center networks (DCN) still faces enormous challenges. To the best of our knowledge, there is a lack of systematic and original investigations with in-depth analysis of intelligent DCN. To this end, in this paper we investigate the application of ML to DCN optimization and provide a general overview and in-depth analysis of recent works, covering flow prediction, flow classification, and resource management. Moreover, we give unique insights into the technological evolution of the fusion of DCN and ML, together with some challenges and future research opportunities.

  • Research papers
  • Kaiwen Yu, Gang Wu, Shaoqian Li, Geoffrey Ye Li
    Journal of Communications and Information Networks. 2022, 7(2): 170-180. https://doi.org/10.23919/JCIN.2022.9815200

    With affordable overhead on information exchange, energy-efficient beamforming has the potential to achieve both low power consumption and high spectral efficiency. This paper formulates the problem of joint beamforming and power allocation for a multiple-input single-output (MISO) multi-cell network with local observations, taking energy efficiency into account. To reduce the complexity of jointly processing received signals in the presence of a large number of base stations (BSs), a new distributed framework is proposed for beamforming with multi-cell cooperation or competition. The optimization problem is modeled as a partially observable Markov decision process (POMDP) and solved by a distributed multi-agent self-decision beamforming (DMAB) algorithm based on the distributed deep recurrent Q-network (D2RQN). Furthermore, a limited-information-exchange scheme is designed for inter-cell cooperation to boost global performance. The proposed learning architecture, with considerably less information exchange, is effective and scalable to high-dimensional problems with increasing numbers of BSs. Moreover, the proposed DMAB algorithm outperforms distributed deep Q-network (DQN) based methods and non-learning-based methods with significant performance improvements.
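
    The following PyTorch sketch illustrates, under stated assumptions, the kind of recurrent Q-network a single BS agent could use in a POMDP: a GRU summarizes the history of partial local observations, and a linear head scores a finite codebook of beamforming/power actions. It is an illustrative minimal model, not the paper's D2RQN architecture; the dimensions and codebook size are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentQNet(nn.Module):
    def __init__(self, obs_dim, num_actions, hidden=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_actions)

    def forward(self, obs_seq, h=None):
        # obs_seq: (batch, time, obs_dim). The hidden state h carries memory
        # across decisions, which is what lets the agent act under partial
        # observability instead of a single-snapshot state.
        out, h = self.gru(obs_seq, h)
        return self.head(out), h

# Greedy action selection from the latest local observation.
net = RecurrentQNet(obs_dim=16, num_actions=32)
obs = torch.randn(1, 1, 16)            # one agent, one time step
q, h = net(obs)
action = q[:, -1].argmax(dim=-1)       # index into the beam/power codebook
print(action)
```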

  • Brou Bernard Ehui, Yiran Han, Hua Guo, Jianwei Liu
    Journal of Communications and Information Networks. 2022, 7(2): 181-191. https://doi.org/10.23919/JCIN.2022.9815201

    Because Internet of things (IoT) devices are resource constrained, traditional cryptographic protocols are not suitable for IoT environments; even when they can be implemented, their performance is often unacceptable. As a result, lightweight protocols are required to cope with these challenges. To address the security challenges in IoT networks, we present a lightweight mutual authentication protocol for the IoT. The protocol aims to provide a secure mutual authentication mechanism between the sensor node and the gateway using lightweight cryptographic algorithms. The protocol relies on two main shared secret keys: a permanent key (kp) used for encrypting messages during the mutual authentication phase and an update key (ku) used for the communication session. The session key is constantly updated after a pre-defined session time (sesstime_i) by using the previous session information. We use lightweight cryptographic mechanisms, including symmetric-key cryptography, a hash-based message authentication code (HMAC), and a hash function, to design the protocol. We analyze the protocol using the Burrows-Abadi-Needham (BAN) logic method, and the results show that the proposed scheme offers good security and performance compared with existing related protocols. It can provide a secure mutual authentication mechanism in the IoT environment.
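
    Below is a hedged Python sketch of an HMAC-based challenge-response exchange in the spirit of the protocol: kp authenticates both ends, and ku is refreshed from the previous session's information so no key material travels over the air. The message formats and the key-update rule are illustrative assumptions, not the paper's exact specification.

```python
import hmac, hashlib, os

kp = os.urandom(32)   # permanent shared key (provisioned out of band)
ku = os.urandom(32)   # initial update/session key

def tag(key, *parts):
    """HMAC-SHA256 over the concatenated message parts."""
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# --- mutual authentication phase (both roles shown in one script) ---
n_s = os.urandom(16)                    # sensor-node nonce
n_g = os.urandom(16)                    # gateway nonce
gw_proof = tag(kp, b"gw", n_s, n_g)     # gateway proves knowledge of kp
assert hmac.compare_digest(gw_proof, tag(kp, b"gw", n_s, n_g))  # sensor verifies
sn_proof = tag(kp, b"sn", n_g, n_s)     # sensor proves knowledge of kp
assert hmac.compare_digest(sn_proof, tag(kp, b"sn", n_g, n_s))  # gateway verifies

# --- session key update after sesstime_i expires ---
# Both sides derive the next ku from the previous session's information.
ku = hashlib.sha256(ku + n_s + n_g).digest()
print("new session key:", ku.hex())
```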

  • Chiya Zhang, Shiyuan Liang, Chunlong He, Kezhi Wang
    Journal of Communications and Information Networks. 2022, 7(2): 192-201. https://doi.org/10.23919/JCIN.2022.9815202

    In this paper, a multi-unmanned aerial vehicle (multi-UAV), multi-user system is studied, in which the UAVs serve as aerial base stations (BSs) for ground users in the same frequency band without knowing the users' locations and channel parameters. We aim to maximize the total throughput of all users and meet the fairness requirement by optimizing the UAVs' trajectories and transmission power in a centralized way. This problem is non-convex and very difficult to solve, as the locations of the users are unknown to the UAVs. We propose a deep reinforcement learning (DRL) based solution, namely soft actor-critic (SAC), to address it by modeling the problem as a Markov decision process (MDP). We carefully design a reward function that combines sparse and non-sparse rewards to balance exploitation and exploration. The simulation results show that the proposed SAC performs very well in terms of both training and testing.
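
    The following Python sketch illustrates one way such a mixed reward could look: a dense (non-sparse) term, here throughput scaled by Jain's fairness index, is available at every step, while a sparse bonus fires only when every user meets a rate target. The weights, fairness measure, and threshold are assumptions for exposition, not the paper's exact reward.

```python
import numpy as np

def reward(user_rates, min_rate_target=1.0, w_dense=1.0, w_sparse=5.0):
    """user_rates: achievable throughput of each ground user at this step."""
    throughput = np.sum(user_rates)                 # dense signal every step
    fairness = (throughput ** 2 /                   # Jain's fairness index
                (len(user_rates) * np.sum(user_rates ** 2) + 1e-9))
    dense = throughput * fairness                   # steady guidance (exploration)
    sparse = w_sparse if np.min(user_rates) >= min_rate_target else 0.0
    return w_dense * dense + sparse                 # rare bonus (exploitation)

print(reward(np.array([1.2, 0.8, 1.5])))   # bonus not triggered (one user short)
print(reward(np.array([1.2, 1.1, 1.5])))   # all users meet target: bonus added
```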

  • Dingke Yu, Xin Wang, Wenwei Fang, Zixian Ma, Bing Lan, Chunyi Song, Zhiwei Xu
    Journal of Communications and Information Networks. 2022, 7(2): 202-213. https://doi.org/10.23919/JCIN.2022.9815203

    In most existing methods, the direction of arrival (DOA) is approximated by a first-order Taylor expansion, which leads to limited estimation accuracy on a coarse grid owing to the off-grid error. In this paper, a new root sparse Bayesian learning based DOA estimation method robust to gain-phase errors is proposed, which dynamically adjusts the grid angles under coarse grid spacing to compensate for the off-grid error and applies the expectation-maximization (EM) method to derive the iterative update formula of each parameter based on its prior distribution. Simulation results verify that, compared with existing methods, the proposed method reduces the computational complexity through coarse grid sampling while maintaining reasonable accuracy under gain and phase errors.
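
    The key idea, adjusting active grid angles rather than committing to a fixed coarse grid, can be illustrated with a much-simplified NumPy sketch. Here a matched-filter search over a small neighborhood stands in for the paper's EM-based grid update; the array geometry, grid spacing, and refinement rule are all assumptions for exposition.

```python
import numpy as np

M = 8                                   # ULA sensors, half-wavelength spacing
steer = lambda theta: np.exp(-1j * np.pi * np.arange(M)[:, None]
                             * np.sin(np.deg2rad(theta)))

def refine_grid(y, grid, active, span=5.0, steps=81):
    """For each active atom, search a small neighborhood of its grid angle
    for the steering vector best matched to the data, shrinking the
    off-grid error without densifying the whole grid."""
    grid = grid.astype(float).copy()
    for k in active:
        cands = grid[k] + np.linspace(-span, span, steps)
        A = steer(cands)                              # (M, steps) dictionary
        corr = np.abs(A.conj().T @ y) / np.linalg.norm(A, axis=0)
        grid[k] = cands[np.argmax(corr)]              # move atom toward true DOA
    return grid

true_doa = 13.3                                       # off-grid source
y = steer(np.array([true_doa]))[:, 0]                 # noiseless snapshot
grid = np.arange(-60, 61, 10.0)                       # coarse 10-degree grid
k = np.argmin(np.abs(grid - true_doa))                # active atom near source
print(refine_grid(y, grid, [k])[k])                   # ~13.3 despite the coarse grid
```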

  • Yongli An, Jingjing Yue, Lei Chen, Zhanlin Ji
    Journal of Communications and Information Networks. 2022, 7(2): 214-220. https://doi.org/10.23919/JCIN.2022.9815204

    In the one-bit massive multiple-input multiple-output (MIMO) channel scenario, accurate channel estimation becomes more difficult because the signals received by the low-resolution analog-to-digital converters (ADCs) are quantized and affected by channel noise. Therefore, a one-bit massive MIMO channel estimation method is proposed in this paper, in which the channel matrix is regarded as a two-dimensional image. To enhance the significance of the noise features in the image and remove them, a channel attention mechanism is introduced into the conditional generative adversarial network (CGAN) that generates the channel images, and the loss function is improved. The simulation results show that the improved network can use fewer pilots to obtain better channel estimation results. Under the same number of pilots and the same signal-to-noise ratio (SNR), the channel estimation accuracy can be improved by about 7.5 dB, and the method can adapt to scenarios with more antennas.
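
    One common way to realize such a channel attention mechanism is a squeeze-and-excitation style block, sketched below in PyTorch. This is an illustrative assumption about the block's structure, not the paper's exact generator; it shows how per-channel gates can emphasize informative feature maps of the channel "image" and suppress noise-dominated ones.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global statistics
        self.fc = nn.Sequential(                     # excite: per-channel gate
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                            # x: (B, C, H, W)
        w = self.fc(self.pool(x).flatten(1))         # (B, C) weights in [0, 1]
        return x * w[:, :, None, None]               # reweight feature maps

# Example: feature maps of a quantized-channel "image" (e.g., 32x32 grid).
feat = torch.randn(2, 16, 32, 32)
print(ChannelAttention(16)(feat).shape)              # torch.Size([2, 16, 32, 32])
```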