The rapid evolution of wireless technologies and the growing complexity of network infrastructures necessitate a paradigm shift in how communication networks are designed, configured, and managed. Recent advancements in large language models (LLMs) have sparked interest in their potential to revolutionize wireless communication systems. However, existing studies on LLMs for wireless systems are limited to direct applications in telecom language understanding. To empower LLMs with knowledge and expertise in the wireless domain, this paper proposes WirelessLLM, a comprehensive framework for adapting and enhancing LLMs to address the unique challenges and requirements of wireless communication networks. We first identify three foundational principles that underpin WirelessLLM: knowledge alignment, knowledge fusion, and knowledge evolution. Then, we investigate the enabling technologies to build WirelessLLM, including prompt engineering, retrieval-augmented generation, tool usage, multi-modal pre-training, and domain-specific fine-tuning. Moreover, we present three case studies to demonstrate the practical applicability and benefits of WirelessLLM for solving typical problems in wireless networks. Finally, we conclude by highlighting key challenges and outlining potential avenues for future research.
Channel prediction is an effective approach for reducing the feedback or estimation overhead in massive multi-input multi-output (m-MIMO) systems. However, existing channel prediction methods lack precision due to model mismatch errors or network generalization issues. Large language models (LLMs) have demonstrated powerful modeling and generalization abilities, and have been successfully applied to cross-modal tasks, including time series analysis. Leveraging the expressive power of LLMs, we propose a pre-trained LLM-empowered channel prediction (LLM4CP) method to predict the future downlink channel state information (CSI) sequence based on the historical uplink CSI sequence. We fine-tune the network while freezing most of the parameters of the pre-trained LLM for better cross-modality knowledge transfer. To bridge the gap between the channel data and the feature space of the LLM, the preprocessor, embedding, and output modules are specifically tailored to account for unique channel characteristics. Simulations validate that the proposed method achieves state-of-the-art (SOTA) prediction performance on full-sample, few-shot, and generalization tests with low training and inference costs.
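The supervised setup behind sequence-based channel predictors of this kind can be illustrated by framing a historical CSI sequence into (history → future) training pairs. The sketch below is a toy numpy illustration of that framing only, not the LLM4CP pipeline; the sequence length, window sizes, and function name are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100
# Toy complex CSI sequence for a single subcarrier/antenna (synthetic data)
csi = rng.standard_normal(T) + 1j * rng.standard_normal(T)

def make_pairs(seq, hist_len, pred_len):
    """Slide a window over the sequence to build (history, future) pairs."""
    X, Y = [], []
    for t in range(len(seq) - hist_len - pred_len + 1):
        X.append(seq[t:t + hist_len])                        # historical CSI window
        Y.append(seq[t + hist_len:t + hist_len + pred_len])  # future CSI target
    return np.array(X), np.array(Y)

X, Y = make_pairs(csi, hist_len=16, pred_len=4)
print(X.shape, Y.shape)  # (81, 16) (81, 4)
```

A predictor is then trained to map each row of `X` (historical uplink CSI) to the corresponding row of `Y` (future downlink CSI); in LLM4CP the mapping is the adapted pre-trained LLM.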
Over-the-air computation (AirComp) has recently emerged as a promising multiple-access technique for fast wireless data aggregation (WDA) from distributed wireless devices (WDs). This paper investigates an energy harvesting (EH) AirComp system, in which multiple EH-powered single-antenna WDs simultaneously send wireless signals to a single-antenna access point (AP) with conventional energy supply for WDA via AirComp. Under this setup, we minimize the average computation mean square error (MSE) over a particular time period by jointly optimizing the transmit energy allocation at the WDs and the AirComp denoising factors at the AP over time, subject to the energy causality constraints at individual WDs. First, we consider the offline scenario by assuming that the energy state information (ESI) and channel state information (CSI) are non-causally known at the beginning of the period, in which the formulated average MSE minimization corresponds to a non-convex optimization problem. We present a high-quality converged solution by using the techniques of alternating optimization and convex optimization. It is shown that for each WD, if the EH rate is sufficiently high, then channel inversion power allocation is adopted; while if the EH rate is low, then all the harvested energy should be used up for transmission with proper energy allocation over time. Next, we consider the online scenario with causal ESI and CSI, in which the MSE minimization becomes a stochastic optimization problem. In this scenario, we present an offline-inspired online algorithm to obtain efficient online energy allocation designs by utilizing the obtained offline solutions. Finally, numerical results show that the proposed designs significantly outperform two benchmark schemes with power-halving and full-power transmission, respectively.
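The roles of channel inversion and the denoising factor can be seen in a minimal numerical sketch: if each WD scales its signal so that all signals arrive with equal amplitude, the AP's scaled sum recovers the target aggregate up to scaled noise. This is a toy real-valued illustration, not the paper's optimization; the channel model, denoising factor value, and noise power are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4                      # number of WDs
h = rng.rayleigh(1.0, K)   # toy real-valued channel gains
sigma2 = 0.1               # AP receiver noise power (assumed)
eta = 0.5                  # AirComp denoising factor (assumed)

# Channel-inversion transmit scaling: h_k * b_k = sqrt(eta) for every WD
b = np.sqrt(eta) / h
s = rng.standard_normal(K)                  # unit-power device signals
noise = np.sqrt(sigma2) * rng.standard_normal()
y = np.sum(h * b * s) + noise               # superimposed over-the-air signal
estimate = y / np.sqrt(eta)                 # AP denoising (scaling) step

# With perfect inversion, the only error is scaled noise: MSE = sigma2 / eta
err = estimate - np.sum(s)
print(err, noise / np.sqrt(eta))
```

This also shows why channel inversion can be too expensive when the EH rate is low: `b_k` grows as `1/h_k`, so a deeply faded WD would need more energy than it can harvest, which is where the paper's energy-causality-aware allocation comes in.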
Unmanned aerial vehicle (UAV)-based edge computing is an emerging technology that provides fast task processing over a wide area. To address the issues of the limited computation resources of a single UAV and the finite communication resources in multi-UAV networks, this paper jointly considers task offloading and wireless channel allocation in a collaborative multi-UAV computing network, where a high-altitude platform station (HAPS) is adopted as the relay device for communication between UAV clusters consisting of UAV cluster heads (ch-UAVs) and mission UAVs (m-UAVs). We propose an algorithm that jointly optimizes task offloading and wireless channel allocation to maximize the average service success rate (ASSR) over a time period. In particular, the simulated annealing (SA) algorithm with random perturbations is used for optimal channel allocation, aiming to reduce interference and minimize transmission delay. A multi-agent deep deterministic policy gradient (MADDPG) algorithm is proposed to obtain the best task offloading strategy. Simulation results demonstrate the effectiveness of the SA algorithm in channel allocation. Meanwhile, when jointly considering computation and channel resources, the proposed scheme effectively enhances the ASSR in comparison to other benchmark algorithms.
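A simulated-annealing search with random perturbations, as used here for channel allocation, can be sketched in a few lines: perturb one link's channel at random, accept worse solutions with a temperature-dependent probability, and cool down. The interference weights, problem size, and cooling schedule below are hypothetical toy values, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_links, n_channels = 8, 3
# Toy pairwise interference incurred when two links share a channel (assumed)
W = np.triu(rng.random((n_links, n_links)), 1)

def cost(assign):
    same = assign[:, None] == assign[None, :]
    return float((W * same).sum())   # total co-channel interference

assign = rng.integers(0, n_channels, n_links)
best, best_cost = assign.copy(), cost(assign)
init_cost = best_cost
T = 1.0
for step in range(2000):
    cand = assign.copy()
    cand[rng.integers(n_links)] = rng.integers(n_channels)  # random perturbation
    d = cost(cand) - cost(assign)
    if d < 0 or rng.random() < np.exp(-d / T):              # Metropolis acceptance
        assign = cand
        if cost(assign) < best_cost:
            best, best_cost = assign.copy(), cost(assign)
    T *= 0.995                                              # geometric cooling
print(init_cost, best_cost)
```

The acceptance of occasional uphill moves at high temperature is what lets the search escape local minima before the cooling schedule locks it into a good allocation.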
Millimeter-wave (mmWave) is capable of achieving gigabit-per-second communication capacity and centimeter-level sensing accuracy, and has become one of the main frequency bands for integrated sensing and communications (ISAC) research. Hybrid beamforming techniques have attracted much attention for overcoming the high path loss of mmWave and further reducing the hardware cost of the system. However, the related studies based on multicarrier and fully-connected hybrid architectures are still limited. For this reason, this paper investigates an orthogonal frequency division multiplexing (OFDM) based mmWave fully-connected hybrid-architecture ISAC system to form a stable communication beam and a dynamically varying sensing beam. To realize the aforementioned multifunctional beams, a hybrid beamformer design problem based on multicarrier weighted error minimization is formulated and solved efficiently using the penalty dual decomposition (PDD) algorithm. Meanwhile, based on the echo model, the multicarrier multiple signal classification (MUSIC) algorithm for target angle-of-arrival estimation and the two-dimensional discrete Fourier transform (2D-DFT) algorithm for distance and velocity estimation are proposed, respectively. Numerical simulation results show that, by adjusting the weighting factor, a flexible trade-off can be achieved between communication spectral efficiency and sensing accuracy.
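The MUSIC angle-of-arrival step mentioned above can be illustrated with a minimal single-carrier, single-target numpy sketch: form the sample covariance from array snapshots, take the noise subspace, and scan a steering-vector grid for the spectrum peak. The array size, true angle, and noise level are assumed toy values, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
M, snaps = 8, 200          # ULA elements and snapshots (assumed)
theta_true = 20.0          # true target angle in degrees (assumed)
d = 0.5                    # element spacing in wavelengths

def steering(theta_deg):
    """ULA steering vector for a given arrival angle."""
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg)))

a = steering(theta_true)
s = (rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)) / np.sqrt(2)
n = 0.1 * (rng.standard_normal((M, snaps)) + 1j * rng.standard_normal((M, snaps)))
X = np.outer(a, s) + n                      # received snapshots

R = X @ X.conj().T / snaps                  # sample covariance matrix
eigval, eigvec = np.linalg.eigh(R)          # eigenvalues in ascending order
En = eigvec[:, :-1]                         # noise subspace (one source assumed)

grid = np.arange(-90, 90.5, 0.5)
spec = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
theta_hat = grid[int(np.argmax(spec))]
print(theta_hat)
```

The multicarrier variant in the paper extends this idea across OFDM subcarriers; the spectrum peaks sharply where the scanned steering vector falls outside the noise subspace, i.e., at the true arrival angle.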
It is widely recognized that future wireless networks will be able to efficiently slice heterogeneous resources to provide customized services for various use cases. However, it is challenging to meet the diverse requirements of ever-growing applications, especially the stringent requirements of numerous delay-sensitive and/or computation-intensive applications. To tackle this challenge, we should not only consider user admission control to cope with resource limitations, but also make resource management more intelligent and flexible to meet diverse service needs. Taking advantage of mobile edge computing (MEC) and network slicing, in this paper we propose deep edge slicing (DES) to jointly optimize user admission control and resource scheduling, with the aim of minimizing the system cost while guaranteeing multitudinous quality-of-service (QoS) requirements. Specifically, we first apply a deep reinforcement learning approach to select the optimal set of access users with different service requests so as to maximize resource utilization. Then a deep learning algorithm is employed to predict traffic data for allocating the communication and computing resources to different slices in advance. Finally, we realize the dynamic scheduling of heterogeneous resources by solving the optimization problem of minimizing the system cost. Simulation results demonstrate that DES can greatly reduce the system cost compared to other benchmarks.
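The admission-control subproblem, choosing which users to admit under a resource budget, can be made concrete with a simple greedy value-density baseline. This is only an illustrative stand-in for the learning-based selection in DES; the demands, utilities, and budget below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_users = 10
demand = rng.uniform(1, 5, n_users)   # per-user resource demand (toy units)
value = rng.uniform(0, 1, n_users)    # utility of admitting each user (assumed)
budget = 12.0                         # total slice resource budget (assumed)

# Greedy admission by value density: a classic knapsack-style baseline
order = np.argsort(-value / demand)
admitted, used = [], 0.0
for u in order:
    if used + demand[u] <= budget:
        admitted.append(int(u))
        used += demand[u]
print(sorted(admitted), round(used, 2))
```

A reinforcement-learning admission policy, as in DES, replaces this myopic ranking with a learned one that can account for future traffic and QoS coupling across slices.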