Full-Text Download Ranking

  • Articles published within the past year
  • Within the past two years
  • Within the past three years
  • All
  • Downloads in the past month
  • Downloads in the past year

  • Journal of Communications and Information Networks. 2022, 7(2): 214-220. https://doi.org/10.23919/JCIN.2022.9815204
    Abstract (96)  Full-text PDF (200)  HTML (46)

    In the one-bit massive multiple-input multiple-output (MIMO) channel scenario, accurate channel estimation becomes more difficult because the signals received by the low-resolution analog-to-digital converters (ADCs) are quantized and affected by channel noise. Therefore, a one-bit massive MIMO channel estimation method is proposed in this paper. The channel matrix is regarded as a two-dimensional image. In order to enhance the significance of noise features in the image and remove them, the channel attention mechanism is introduced into the conditional generative adversarial network (CGAN) that generates the channel images, and the loss function is improved. The simulation results show that the improved network can use a smaller number of pilots to obtain better channel estimation results. Under the same number of pilots and signal-to-noise ratio (SNR), the channel estimation accuracy can be improved by about 7.5 dB, and the method can adapt to scenarios with more antennas.
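
    As a loose illustration of the attention idea above, the sketch below implements a squeeze-and-excitation style channel attention block of the kind typically inserted into a CGAN generator; the layer sizes and reduction ratio are assumptions, not the authors' exact architecture.

    ```python
    # Illustrative sketch only: a squeeze-and-excitation style channel
    # attention block; sizes are assumptions, not the paper's architecture.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global spatial average
            self.fc = nn.Sequential(
                nn.Conv2d(channels, channels // reduction, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            w = self.fc(self.pool(x))                # excitation: per-channel weights in (0, 1)
            return x * w                             # re-weight feature maps channel-wise

    feats = torch.randn(8, 16, 32, 32)               # a batch of 2-D "channel images"
    print(ChannelAttention(16)(feats).shape)         # torch.Size([8, 16, 32, 32])
    ```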

  • Journal of Communications and Information Networks. 2022, 7(2): 145-156. https://doi.org/10.23919/JCIN.2022.9815198
    Abstract (131)  Full-text PDF (180)  HTML (52)
    CSCD(1)

    In this paper, we propose a reconfigurable intelligent surface (RIS) assisted over-the-air federated learning (FL) system, where multiple antennas are deployed at each edge device to enable simultaneous multidimensional model transmission over a millimeter wave (mmWave) network. We conduct rigorous convergence analysis for the proposed FL system, taking into account dynamic channel fading and analog transmissions. Inspired by the convergence analysis, we propose to jointly optimize the receive digital and analog beamforming matrices at the access point, the RIS phase-shift matrix, and the transmit beamforming matrices at the transmitting devices to minimize the transmission distortion. The coupling of the optimization variables and the non-convex constraints make the formulated problem challenging to solve. To this end, we develop a low-complexity Riemannian conjugate gradient (RCG)-based algorithm to handle the unit-modulus constraints and decouple the optimization variables. Simulations show that the proposed RCG algorithm outperforms the successive convex approximation algorithm in terms of learning performance.
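
    The unit-modulus constraint handling can be pictured with the following minimal sketch of a Riemannian gradient step on the unit-modulus manifold (the constraint set for the RIS phase shifts); it is the gradient-descent simplification of a full RCG method, and the least-squares objective is only a stand-in for the paper's distortion metric.

    ```python
    # Minimal sketch: Riemannian gradient descent on {theta : |theta_i| = 1},
    # the gradient-only simplification of RCG; ||A theta - b||^2 is a stand-in
    # objective, not the paper's distortion metric.
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 32, 16
    A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    b = rng.standard_normal(m) + 1j * rng.standard_normal(m)

    theta = np.exp(1j * rng.uniform(0, 2 * np.pi, n))     # feasible start: |theta_i| = 1
    step = 1e-3
    for _ in range(300):
        grad = 2 * A.conj().T @ (A @ theta - b)           # Euclidean gradient
        rgrad = grad - np.real(grad * theta.conj()) * theta  # project onto tangent space
        theta = theta - step * rgrad
        theta = theta / np.abs(theta)                     # retraction back onto the manifold

    print(np.linalg.norm(A @ theta - b))                  # residual decreases monotonically
    ```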

  • Journal of Communications and Information Networks. 2022, 7(2): 107-121. https://doi.org/10.23919/JCIN.2022.9815195
    Abstract (499)  Full-text PDF (162)  HTML (196)
    CSCD(1)

    Sixth generation (6G) enabled edge intelligence opens up a new era of the Internet of everything and makes it possible to interconnect people, devices, and the cloud anytime, anywhere. More and more next-generation wireless network smart service applications are changing our way of life and improving our quality of life. As the hottest new form of next-generation Internet applications, Metaverse is striving to connect billions of users and create a shared world where virtual and reality merge. However, limited by resources, computing power, and sensory devices, Metaverse is still far from realizing its full vision of immersion, materialization, and interoperability. To this end, this survey aims to realize this vision through the organic integration of 6G-enabled edge artificial intelligence (AI) and Metaverse. Specifically, we first introduce three new types of edge-Metaverse architectures that use 6G-enabled edge AI to solve resource and computing constraints in Metaverse. Then we summarize the technical challenges that these architectures face in Metaverse and the existing solutions. Furthermore, we explore how the edge-Metaverse architecture technology helps Metaverse to interact with and share digital data. Finally, we discuss future research directions to realize the true vision of Metaverse with 6G-enabled edge AI.

  • Journal of Communications and Information Networks. 2023, 8(1): 1-12. https://doi.org/10.23919/JCIN.2023.10087243
    Abstract (142)  Full-text PDF (143)  HTML (97)

    The resource allocation of federated learning (FL) for unmanned aerial vehicle (UAV) swarm systems is investigated. FL-based UAV swarms realize artificial intelligence (AI) applications by means of distributed training while ensuring the security of private data. However, the direct application of FL in UAV swarms incurs high overhead. Therefore, in this article, we consider the resource allocation problem in FL for UAV swarms. To avoid the high communication overhead between UAVs and the central server, we propose an FL framework for UAV swarms based on mobile edge computing (MEC), in which model aggregation is migrated to edge servers. In the proposed framework, the total cost of FL is defined as the weighted sum of the total delay for the UAV swarms to complete FL and the system energy consumption. In order to minimize the total cost of FL, we propose a resource allocation algorithm based on deep reinforcement learning (DRL) for the joint optimization of computing resources and multi-UAV association. The simulation results show that: 1) compared with the benchmark algorithm, the proposed algorithm can effectively reduce the total cost of FL; 2) the proposed algorithm can realize a trade-off between task completion delay and system energy consumption through weight changes.

  • Journal of Communications and Information Networks. 2022, 7(4): 375-382. https://doi.org/10.23919/JCIN.2022.10005215
    Abstract (134)  Full-text PDF (133)  HTML (56)

    Nowadays, the emerging paradigm of semantic communications seems to offer an attractive opportunity to improve transmission reliability and efficiency in new-generation communication systems. In particular, focusing on the spectrum scarcity expected to afflict the upcoming sixth generation (6G) networks, this paper analyses the behavior of semantic communications in a cell-dense scenario, in which users belonging to different small base station areas may be allocated to the same channel, giving rise to non-negligible interference that severely affects communication reliability. In such a context, artificial intelligence methodologies are of paramount importance in order to speed up the switch from the traditional communication paradigm to the novel semantic one. As a consequence, a deep convolutional neural network based encoder-decoder architecture is exploited here in the definition of the proposed semantic communications framework. Finally, extensive numerical simulations have been performed to test the advantages of the proposed framework in different interfering scenarios and in comparison with different traditional and semantic alternatives.

  • Journal of Communications and Information Networks. 2022, 7(4): 447-456. https://doi.org/10.23919/JCIN.2022.10005221
    Abstract (129)  Full-text PDF (130)  HTML (97)

    Integrated sensing and communication (ISAC) is a spectrum- and energy-efficient approach to realizing dual functions on a unified hardware platform. In this paper, we consider a multiple-input multiple-output (MIMO) ISAC system, where the transmitted waveform, consisting of communication signals and a dedicated sensing signal, is optimized for the dual purposes of estimating targets and serving downlink single-antenna users. Specifically, the sensing interference and multi-user interference are exploited, rather than suppressed, by the waveform design scheme. The joint waveform design problem is formulated to maximize the constructive interference (CI) while satisfying the power budget and a waveform similarity constraint with respect to a benchmark signal, which bounds the sensing estimation accuracy. To obtain the benchmark signal that achieves the optimal estimation performance, we propose a semidefinite relaxation based algorithm to solve the optimization problem. For clarity, we derive the real representation of the complex joint waveform design problem and prove its convexity. Numerical results verify the superiority of the proposed CI-based waveform design when the interference is efficiently exploited as a useful signal source, achieving favorable symbol error ratio performance. Moreover, the dedicated sensing signal provides more degrees of freedom for waveform design.

  • Journal of Communications and Information Networks. 2022, 7(2): 181-191. https://doi.org/10.23919/JCIN.2022.9815201
    Abstract (104)  Full-text PDF (125)  HTML (33)

    Because Internet of things (IoT) devices are resource-constrained, traditional cryptographic protocols are not suitable for IoT environments; even when they can be implemented, their performance is often not acceptable. As a result, a lightweight protocol is required to cope with these challenges. To address security challenges in IoT networks, we present a lightweight mutual authentication protocol for IoT. The protocol aims to provide a secure mutual authentication mechanism between the sensor node and the gateway using lightweight cryptographic algorithms. The protocol relies on two main shared secret keys: a permanent key (kp) used for encrypting messages during the mutual authentication phase and an update key (ku) used for the communication session. The session key is constantly updated after a pre-defined session time (sesstime_i) by using the previous session information. We use lightweight cryptographic mechanisms, including symmetric-key cryptography, hash-based message authentication codes (HMAC), and a hash function, to design the protocol. We analyze the protocol using the Burrows-Abadi-Needham (BAN) logic method, and the results show that the proposed scheme has good security and performance compared to existing related protocols. It can provide a secure mutual authentication mechanism in the IoT environment.
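
    The two-key structure described above can be sketched as follows; the message formats, nonce handling, and key-update rule are illustrative assumptions, not the paper's exact construction.

    ```python
    # Hedged sketch of the two-key idea: kp protects the handshake, ku is
    # re-derived from previous-session information after each interval.
    import hmac, hashlib, os, time

    kp = os.urandom(32)                    # permanent shared key (pre-provisioned)
    ku = os.urandom(32)                    # initial session (update) key

    def tag(key: bytes, msg: bytes) -> bytes:
        """HMAC-SHA256 message authentication code."""
        return hmac.new(key, msg, hashlib.sha256).digest()

    # Mutual authentication: each side proves knowledge of kp over fresh nonces.
    n_node, n_gw = os.urandom(16), os.urandom(16)
    node_proof = tag(kp, n_node + n_gw + b"node")
    gw_proof = tag(kp, n_gw + n_node + b"gateway")
    assert hmac.compare_digest(node_proof, tag(kp, n_node + n_gw + b"node"))

    # Session key update after sesstime_i, from previous-session material
    # (assumed update rule for illustration).
    def next_ku(prev_ku: bytes, session_info: bytes) -> bytes:
        return hashlib.sha256(prev_ku + session_info).digest()

    ku = next_ku(ku, n_node + n_gw + int(time.time()).to_bytes(8, "big"))
    ```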

  • Journal of Communications and Information Networks. 2023, 8(4): 303-318. https://doi.org/10.23919/JCIN.2023.10272357
    Abstract (89)  Full-text PDF (124)  HTML (18)

    This paper studies the fundamental limit of semantic communications over the discrete memoryless channel. We consider the scenario of sending a semantic source consisting of an observation state and its corresponding semantic state, both of which are recovered at the receiver. To derive the performance limit, we adopt the semantic rate-distortion function (SRDF) to study the relationship among the minimum compression rate, observation distortion, semantic distortion, and channel capacity. For the case with an unknown semantic source distribution, where only a set of source samples is available, we propose a neural-network-based method that leverages generative networks to learn the semantic source distribution. Furthermore, for a special case where the semantic state is a deterministic function of the observation, we design a cascade neural network to estimate the SRDF. For the case with a perfectly known semantic source distribution, we propose a general Blahut-Arimoto (BA) algorithm to effectively compute the SRDF. Finally, experimental results validate the proposed algorithms for scenarios with an ideal Gaussian semantic source and for some practical datasets.
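
    For intuition, the sketch below shows a classical Blahut-Arimoto iteration for a rate-distortion function with a single distortion measure; the paper's general BA algorithm for the SRDF handles both the observation and semantic distortions, which this simplified version omits.

    ```python
    # Classical BA iteration for R(D) with one distortion constraint; the
    # two-distortion SRDF version in the paper shares this alternating form.
    import numpy as np

    def blahut_arimoto(p_x, d, s, iters=500):
        """p_x: source pmf; d[i, j]: distortion d(x_i, xhat_j); s < 0: slope."""
        q = np.full(d.shape[1], 1.0 / d.shape[1])    # reproduction marginal q(xhat)
        for _ in range(iters):
            w = q * np.exp(s * d)                    # unnormalized p(xhat | x)
            p_cond = w / w.sum(axis=1, keepdims=True)
            q = p_x @ p_cond                         # update q(xhat)
        D = np.sum(p_x[:, None] * p_cond * d)        # achieved distortion
        R = np.sum(p_x[:, None] * p_cond *
                   np.log2(p_cond / q[None, :]))     # rate in bits
        return R, D

    p_x = np.array([0.5, 0.5])
    d = 1.0 - np.eye(2)                              # Hamming distortion
    print(blahut_arimoto(p_x, d, s=-3.0))            # one (R, D) point on the R(D) curve
    ```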

  • Journal of Communications and Information Networks. 2022, 7(2): 170-180. https://doi.org/10.23919/JCIN.2022.9815200
    Abstract (175)  Full-text PDF (115)  HTML (56)

    With affordable overhead on information exchange, energy-efficient beamforming has the potential to achieve both low power consumption and high spectral efficiency. This paper formulates the problem of joint beamforming and power allocation for a multiple-input single-output (MISO) multi-cell network with local observations, taking energy efficiency into account. To reduce the complexity of jointly processing the received signals in the presence of a large number of base stations (BSs), a new distributed framework is proposed for beamforming with multi-cell cooperation or competition. The optimization problem is modeled as a partially observable Markov decision process (POMDP) and is solved by a distributed multi-agent self-decision beamforming (DMAB) algorithm based on the distributed deep recurrent Q-network (D2RQN). Furthermore, a limited-information exchange scheme is designed for inter-cell cooperation to boost global performance. The proposed learning architecture, with considerably less information exchange, is effective and scalable for high-dimensional problems with increasing numbers of BSs. Moreover, the proposed DMAB algorithms significantly outperform distributed deep Q-network (DQN) based methods and non-learning based methods.

  • Journal of Communications and Information Networks. 2023, 8(3): 203-211. https://doi.org/10.23919/JCIN.2023.10272348
    Abstract (118)  Full-text PDF (107)  HTML (6)

    The radio map is an advanced technology that mitigates the reliance of multiple-input multiple-output (MIMO) beamforming on channel state information (CSI). In this paper, we introduce the concept of a deep learning-based radio map, which is designed to be generated directly from raw CSI data. In accordance with the conventional CSI acquisition mechanism of MIMO, we first introduce two baseline radio map schemes, i.e., the CSI prediction-based radio map and the throughput prediction-based radio map. To fully leverage the powerful inference capability of deep neural networks, we further propose an end-to-end structure that outputs the beamforming vector directly from the location information. The rationale behind the proposed end-to-end structure is to design the neural network in a task-oriented manner, which is achieved by customizing a loss function that quantifies the communication quality. Numerical results show the superiority of the task-oriented design and confirm the potential of the deep learning-based radio map to replace CSI with location information.
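
    The task-oriented loss idea can be sketched as follows: the network outputs a beamforming vector and is trained to maximize an achievable-rate surrogate directly. The MISO rate expression below is an assumed stand-in for the paper's communication-quality metric.

    ```python
    # Hedged sketch of a task-oriented loss: train directly on an assumed
    # achievable-rate surrogate instead of regressing CSI.
    import torch

    def task_oriented_loss(w: torch.Tensor, h: torch.Tensor, noise_power: float = 1.0):
        """w, h: complex tensors of shape (batch, num_antennas)."""
        signal = torch.abs(torch.sum(torch.conj(h) * w, dim=-1)) ** 2
        rate = torch.log2(1.0 + signal / noise_power)    # per-sample achievable rate
        return -rate.mean()                              # minimize the negative rate

    batch, n_tx = 64, 8
    h = torch.randn(batch, n_tx, dtype=torch.cfloat)
    w = torch.randn(batch, n_tx, dtype=torch.cfloat, requires_grad=True)
    loss = task_oriented_loss(w, h)
    loss.backward()                                      # gradients flow to the beamformer
    print(round(loss.item(), 3))
    ```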

  • Journal of Communications and Information Networks. 2022, 7(3): 278-295. https://doi.org/10.23919/JCIN.2022.9906942
    Abstract (74)  Full-text PDF (97)  HTML (34)

    Fifth generation (5G) cellular networks are intended to overcome the challenging demands posed by dynamic service quality requirements, which cannot be met by a single network technology. Future cellular networks require efficient resource allocation and power control schemes that meet throughput and energy efficiency requirements when multiple technologies coexist and share network resources. In this paper, we optimize the throughput and energy efficiency (EE) performance for the coexistence of two technologies that have been identified for future cellular networks, namely massive multiple-input multiple-output (MIMO) and network-assisted device-to-device (D2D) communications. In such a hybrid network, the co-tier and cross-tier interference between cellular and D2D communications caused by spectrum sharing is a significant challenge. To this end, we formulate the average sum rate and EE optimization problem as a mixed-integer non-linear program (MINLP). We develop distributed resource allocation algorithms based on matching theory to alleviate interference and optimize network performance. It is shown that the proposed algorithms converge to a stable matching and terminate after finitely many iterations. Matlab simulation results show that the proposed algorithms achieve more than 88% of the average transmission rate and 86% of the energy efficiency of the optimal matching, with lower complexity.

  • Journal of Communications and Information Networks. 2023, 8(1): 24-36. https://doi.org/10.23919/JCIN.2023.10087245
    Abstract (75)  Full-text PDF (94)  HTML (46)

    With the expansion of satellite constellations, routing techniques designed for small-scale satellite networks suffer from high routing overhead and low forwarding efficiency. This paper proposes a vector segment routing method for large-scale multi-layer satellite networks. A vector forwarding path is built based on the locations of the source and the destination. Data packets are forwarded along this vector path, shielding routing and forwarding from the influence of satellite motion. Then, a dynamic route maintenance strategy is proposed. In a multi-layer satellite network, the low-orbit satellites are in charge of computing the routing tables for one area, and the routing paths are dynamically adjusted within the area in accordance with the network state. The medium-orbit satellites maintain the connectivity of vector paths across multiple segmented areas. The forwarding mode based on the source and destination locations improves forwarding efficiency, and the segmented route maintenance mode decreases routing overhead. The simulation results indicate that vector segment routing has significant performance advantages in end-to-end delay, packet loss rate, and throughput in a multi-layer satellite network. We also simulate the impact of the routing table update mechanism on network performance and overhead, and give the performance of segmented vector routing in multi-layer low-orbit satellite networks.

  • Journal of Communications and Information Networks. 2022, 7(3): 252-258. https://doi.org/10.23919/JCIN.2022.9906939
    Abstract (114)  Full-text PDF (93)  HTML (36)

    This paper investigates the reliability problem of airborne free-space optical (FSO) communications, and a hybrid FSO/radio frequency (RF) communication system with parallel transmission is proposed, where the data stream is transmitted over both the FSO and RF links simultaneously. Further, to combat channel fading, maximal ratio combining is utilized at the receiver for combining the signals received from both links. The performance of the proposed system is analytically derived in terms of the outage probability and the average bit-error rate (BER). Numerical results show that the proposed hybrid FSO/RF system with parallel transmission outperforms a single airborne FSO or RF link, which provides technical guidance for designing reliable high-speed airborne communication systems.
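
    The benefit of parallel transmission follows from the fact that the post-MRC SNR is the sum of the branch SNRs, so an outage requires both links to fade simultaneously. The Monte Carlo sketch below illustrates this; the log-normal FSO and Rayleigh RF fading models are assumptions for illustration only.

    ```python
    # Monte Carlo sketch: with MRC the combined SNR is the sum of branch SNRs.
    # Fading models (log-normal FSO, Rayleigh/exponential RF) are assumed.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000
    snr_fso = 10 ** (rng.normal(1.0, 0.5, n))        # log-normal turbulence (linear SNR)
    snr_rf = rng.exponential(10.0, n)                # Rayleigh-faded RF branch
    gamma_th = 5.0                                   # outage threshold (linear)

    print("FSO-only outage:", np.mean(snr_fso < gamma_th))
    print("RF-only outage :", np.mean(snr_rf < gamma_th))
    print("MRC outage     :", np.mean(snr_fso + snr_rf < gamma_th))
    ```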

  • Journal of Communications and Information Networks. 2022, 7(4): 383-393. https://doi.org/10.23919/JCIN.2022.10005216
    Abstract (188)  Full-text PDF (90)  HTML (75)

    In this paper, an indoor layout sensing and localization system with a testbed in the 60-GHz millimeter wave (mmWave) band, named mmReality, is elaborated. The mmReality system consists of one transmitter and one mobile receiver, both with a phased array and a single radio frequency (RF) chain. To reconstruct the room layout, the pilot signal is delivered from the transmitter to the receiver via different pairs of transmit and receive beams, so that multipath signals in all directions can be captured. Then spatial smoothing and the two-dimensional multiple signal classification (MUSIC) algorithm are applied to detect the angles of departure (AoDs) and angles of arrival (AoAs) of the propagation paths. Moreover, the technique of multi-carrier ranging is adopted to measure the path lengths. Therefore, with measurements taken by the receiver at different locations in the room, the receiver and the virtual transmitters can be pinpointed to reconstruct the room layout. Experiments show that the reconstructed room layout can be utilized to localize a mobile device via the AoA spectrum.
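
    A one-dimensional MUSIC sketch for AoA estimation on a uniform linear array is given below for intuition; the system applies the two-dimensional variant with spatial smoothing, which this simplified version omits.

    ```python
    # 1-D MUSIC sketch (uniform linear array, half-wavelength spacing); the
    # paper's 2-D AoD/AoA variant with spatial smoothing is omitted here.
    import numpy as np

    rng = np.random.default_rng(2)
    M, K, snapshots = 8, 2, 200                      # sensors, sources, samples
    true_deg = np.array([-20.0, 35.0])

    def steer(deg):
        return np.exp(1j * np.pi * np.arange(M)[:, None] *
                      np.sin(np.deg2rad(deg))[None, :])

    A = steer(true_deg)
    s = rng.standard_normal((K, snapshots)) + 1j * rng.standard_normal((K, snapshots))
    x = A @ s + 0.1 * (rng.standard_normal((M, snapshots)) +
                       1j * rng.standard_normal((M, snapshots)))

    R = x @ x.conj().T / snapshots                   # sample covariance
    eigval, eigvec = np.linalg.eigh(R)
    En = eigvec[:, : M - K]                          # noise subspace (smallest eigenvalues)

    grid = np.linspace(-90, 90, 1801)
    spectrum = 1.0 / np.linalg.norm(En.conj().T @ steer(grid), axis=0) ** 2
    local_max = (spectrum[1:-1] > spectrum[:-2]) & (spectrum[1:-1] > spectrum[2:])
    cand, vals = grid[1:-1][local_max], spectrum[1:-1][local_max]
    print(np.sort(cand[np.argsort(vals)[-K:]]))      # close to [-20, 35]
    ```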

  • Journal of Communications and Information Networks. 2022, 7(3): 296-308. https://doi.org/10.23919/JCIN.2022.9906943
    Abstract (132)  Full-text PDF (88)  HTML (46)

    Array imperfections lead to serious performance degradation of deep neural network (DNN) based direction of arrival (DOA) estimation in low earth orbit (LEO) satellite communications by producing a mismatch between inference data and training data. In this paper, we propose a lightweight deep learning-based algorithm for array imperfection correction and DOA estimation. By preprocessing the covariance matrix of the array antenna outputs into an image, the array imperfection correction and DOA estimation problems are converted into an image-to-image transformation task and an image recognition task, respectively. Furthermore, for the deployment of real-time DNN-based DOA estimation on resource-constrained edge systems, generative adversarial network (GAN) model compression is applied to obtain a lightweight student generator of Pix2Pix for array imperfection correction. MobileNet-V2 is then used to extract the DOA information from the covariance matrix image. Simulation results demonstrate that the DOA estimation performance is significantly improved by the array imperfection correction. The proposed algorithm also better satisfies real-time demands, with decreased inference time on the resource-constrained edge system.

  • Journal of Communications and Information Networks. 2022, 7(2): 192-201. https://doi.org/10.23919/JCIN.2022.9815202
    Abstract (49)  Full-text PDF (87)  HTML (10)

    In this paper, a multi-unmanned aerial vehicle (multi-UAV) multi-user system is studied, where UAVs serve as aerial base stations (BSs) for ground users in the same frequency band without knowing the locations and channel parameters of the users. We aim to maximize the total throughput for all users and meet the fairness requirement by optimizing the UAVs' trajectories and transmission power in a centralized way. This problem is non-convex and very difficult to solve, as the locations of the users are unknown to the UAVs. We propose a deep reinforcement learning (DRL)-based solution, i.e., soft actor-critic (SAC), to address it by modeling the problem as a Markov decision process (MDP). We carefully design a reward function that combines sparse and non-sparse rewards to achieve a balance between exploitation and exploration. The simulation results show that the proposed SAC performs very well in terms of both training and testing.

  • Journal of Communications and Information Networks. 2023, 8(1): 90-98. https://doi.org/10.23919/JCIN.2023.10087251
    Abstract (57)  Full-text PDF (85)  HTML (66)

    Trajectory privacy protection schemes based on suppression strategies rarely take geospatial constraints into account, which makes it more likely for an attacker to determine the user's true sensitive locations and trajectory. To solve this problem, this paper presents a privacy budget allocation method based on the privacy security level (PSL). Firstly, in a custom map, the idea of a P-series is used to reasonably allocate a given total privacy budget to the initially sensitive locations. Then, the privacy security level of each sensitive location is dynamically adjusted by comparing it with the customized initial level threshold parameter μ. Finally, the privacy budget of each initial sensitive location is allocated to its neighbors based on the distance and degree relationships between nodes. Comparing the PSL algorithm with traditional allocation methods, the results show that it allocates the privacy budget more flexibly without compromising location privacy under the same preset conditions.
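
    The P-series idea can be sketched as follows: a total privacy budget is split with weights proportional to 1/i^p, so higher-ranked sensitive locations receive larger shares. The exponent p and the ordering rule are assumptions for illustration.

    ```python
    # Sketch of P-series budget allocation: weights 1/i^p, normalized so the
    # per-location budgets sum to the given total. p and ordering are assumed.
    import numpy as np

    def p_series_budgets(eps_total: float, n: int, p: float = 2.0) -> np.ndarray:
        weights = 1.0 / np.arange(1, n + 1) ** p
        return eps_total * weights / weights.sum()   # budgets sum to eps_total

    eps = p_series_budgets(eps_total=1.0, n=5)
    print(eps.round(3), eps.sum())                   # [0.683 0.171 0.076 0.043 0.027] 1.0
    ```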

  • Journal of Communications and Information Networks. 2023, 8(1): 13-23. https://doi.org/10.23919/JCIN.2023.10087244
    Abstract (68)  Full-text PDF (82)  HTML (34)

    Anomaly detection is an essential part of any practical system: remedying malfunctions and accidents early makes the system secure and robust. Malicious users and malfunctioning cognitive radio (CR) devices may cause severe interference to legitimate users. However, there are no effective methods to detect spontaneous and irregular anomalous behaviors in the sub-sampled data streams produced by wideband compressive spectrum sensing, an important function of a CR device. In this article, to detect anomalous spectrum utilization from sub-sampled data streams, a multilayer perceptron/feed-forward neural network (FFNN) based solution is proposed. The proposed solution learns the patterns of legitimate and anomalous usage autonomously, without expert knowledge. The proposed neural network (NN) framework has also shown benefits such as more than 80% faster detection speed and a lower detection error rate.

  • Journal of Communications and Information Networks. 2023, 8(3): 239-255. https://doi.org/10.23919/JCIN.2023.10272352
    Abstract (128)  Full-text PDF (81)  HTML (6)

    Telecommunication has undergone significant transformations due to the continuous advancements in internet technology, mobile devices, competitive pricing, and changing customer preferences. Specifically, the most recent iteration of OpenAI's large language model, the chat generative pre-trained transformer (ChatGPT), has the potential to propel innovation and bolster operational performance in the telecommunications sector. Nowadays, the exploration of network resource management, control, and operation is still at an initial stage. In this paper, we propose a novel network artificial intelligence architecture named the language model for network traffic (NetLM), a transformer-based large language model designed to understand sequence structures in network packet data and capture their underlying dynamics. The continual convergence of the knowledge space and artificial intelligence (AI) technologies constitutes the core of intelligent network management and control. Multi-modal representation learning is used to unify the multi-modal information of network indicator data, traffic data, and text data in the same feature space. Furthermore, a NetLM-based control policy generation framework is proposed to refine intents incrementally through different abstraction levels. Finally, some potential cases are provided in which NetLM can benefit the telecom industry.

  • Journal of Communications and Information Networks. 2023, 8(3): 187-202. https://doi.org/10.23919/JCIN.2023.10272347
    Abstract (107)  Full-text PDF (79)  HTML (25)

    With the emerging applications of the Internet of things, artificial intelligence, and satellite communications, the future network will be characterized by the Internet of everything around the globe. Network heterogeneity, application cloudification, and personalized user services demand a revolutionary change in the network architecture. With the rapid development of cloud native technology, the new network should support heterogeneous networks and personalized quality of service for users. In this paper, we propose a Cybertwin-based cloud native network (CCNN) that merges the radio access network (RAN), the IP bearer network, and the data center network. The CCNN is built on a cloud native data center network that uses Kubernetes as a network operating system for unified virtualization of computing, storage, and network resources; unified scheduling and allocation; and unified operation and management. Then, we propose a fully decoupled RAN architecture that can flexibly and efficiently utilize resources for personalized user services. We also propose a Cybertwin-based management framework built on Kubernetes for integrated networking, computing, and storage resource scheduling. Finally, we design an immunology-inspired intrinsic security architecture with a zero-trust security system and an adaptive defense system. The proposed CCNN is a new network architecture expected to address the challenges of future-generation communications and networks.

  • Journal of Communications and Information Networks. 2022, 7(3): 333-348. https://doi.org/10.23919/JCIN.2022.9906946
    Abstract (66)  Full-text PDF (75)  HTML (23)

    Using unmanned aerial vehicles (UAVs) to collect data in wireless sensor networks (WSNs) has the advantages of controllable mobility and flexible deployment. However, the potential challenges of energy limitation and data security may limit such applications. To cope with these challenges, a complicated and intractable optimization problem is formulated, which maximizes the performance metric of secrecy energy efficiency (EE) subject to constraints on the secrecy rate, maximum power, and trajectory. Then, an energy-efficient and secure solution is developed to improve the secrecy EE of UAV-enabled data collection in WSNs by jointly optimizing the UAV's trajectory and velocity along with the sensors' power. The proposed solution is an iterative algorithm based on the optimization approaches of alternating optimization, successive convex approximation, and fractional programming. Simulation results demonstrate that the proposed scheme is effective in improving the secrecy EE while guaranteeing data security.

  • Journal of Communications and Information Networks. 2022, 7(3): 221-238. https://doi.org/10.23919/JCIN.2022.9906937
    Abstract (149)  Full-text PDF (74)  HTML (39)

    In this paper, we design a resource management scheme to support stateful applications, which will be prevalent in sixth generation (6G) networks. Different from stateless applications, stateful applications require context data while executing computing tasks from user terminals (UTs). Using a multi-tier computing paradigm with servers deployed at the core network, gateways, and base stations to support stateful applications, we aim to optimize long-term resource reservation by jointly minimizing the usage of computing, storage, and communication resources and the cost of reconfiguring the resource reservation. The coupling among different resources and the impact of UT mobility create challenges in resource management. To address these challenges, we develop digital twin (DT) empowered network planning with two elements, i.e., multi-resource reservation and resource reservation reconfiguration. First, DTs are designed for collecting UT status data, based on which UTs are grouped according to their mobility patterns. Second, an algorithm is proposed to customize the resource reservation for different groups to satisfy their different resource demands. Last, a meta-learning-based approach is developed to reconfigure the resource reservation, balancing the network resource usage and the reconfiguration cost. Simulation results demonstrate that the proposed DT-empowered network planning outperforms benchmark frameworks by using fewer resources and incurring lower reconfiguration costs.

  • Journal of Communications and Information Networks. 2023, 8(1): 71-79. https://doi.org/10.23919/JCIN.2023.10087249
    Abstract (55)  Full-text PDF (71)  HTML (58)

    In millimeter-wave multiple-input multiple-output (MIMO) systems, transmit antenna selection (TAS) can be employed to reduce hardware complexity and energy consumption when the number of antennas becomes very large. However, the traditional exhaustive search TAS tries all possible antenna combinations, which causes high computational complexity and may limit its application in practice. The main advantage of machine learning (ML) lies in its capability of establishing underlying relations between system parameters and the objective, hence being able to shift the computational burden of real-time processing to an offline training phase. Based on this advantage, introducing ML to TAS is a promising way to tackle the high computational complexity problem. Although existing ML-based algorithms try to approach the optimal performance, there is still large room for improvement. In this paper, considering the secure transmission of the system, we model the TAS problem as a multi-class classification problem and propose an efficient antenna selection algorithm based on the gradient boosting decision tree (GBDT), in which we consider the system secrecy capacity and computational complexity as the optimization objectives. On the one hand, the system security performance is improved, because the achievable secrecy capacity is close to that of the traditional exhaustive search algorithm. On the other hand, compared with the exhaustive search algorithm and existing ML-based algorithms, the training efficiency is significantly improved, with complexity O(N), where N is the number of transmit antennas. In addition, the performance of the proposed algorithm is evaluated in an mmWave MIMO system by employing the New York University simulator (NYUSIM) model, which is based on real channel measurements. Performance analysis shows that the proposed GBDT-based scheme can effectively improve the system secrecy capacity and significantly reduce the computational complexity.
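
    A rough sketch of casting TAS as multi-class classification is given below: each class is a candidate antenna subset, the features are per-antenna channel gains, and the labels come from an exhaustive search over training samples. The feature and label choices are illustrative; the paper optimizes the secrecy capacity rather than the sum gain used here.

    ```python
    # Hedged sketch: TAS as multi-class classification with a GBDT. Labels come
    # from exhaustive search on a sum-gain objective, a stand-in for the
    # paper's secrecy-capacity objective.
    import numpy as np
    from itertools import combinations
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(3)
    n_tx, n_sel, n_samples = 6, 2, 2000
    subsets = list(combinations(range(n_tx), n_sel))     # candidate antenna pairs

    H = rng.standard_normal((n_samples, n_tx)) + 1j * rng.standard_normal((n_samples, n_tx))
    gains = np.abs(H) ** 2                               # per-antenna gains (features)
    # "Optimal" label by exhaustive search: subset with the largest summed gain.
    labels = np.argmax(np.stack([gains[:, list(s)].sum(axis=1) for s in subsets]), axis=0)

    clf = GradientBoostingClassifier(n_estimators=50, max_depth=3)
    clf.fit(gains[:1500], labels[:1500])
    print("test accuracy:", clf.score(gains[1500:], labels[1500:]))
    ```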

  • Journal of Communications and Information Networks. 2023, 8(3): 221-230. https://doi.org/10.23919/JCIN.2023.10272350
    Abstract (37)  Full-text PDF (70)  HTML (3)

    In recent years, with the rapid development of Internet and hardware technologies, the number of Internet of things (IoT) devices has grown exponentially. However, IoT devices are constrained by power consumption, making IoT security vulnerable. Malware such as botnets and worms poses significant security threats to users and enterprises alike. Deep learning models have demonstrated strong performance in various tasks across different domains, leading to their application in malicious software detection. Nevertheless, due to the power constraints of IoT devices, large, well-performing models are not suitable for IoT malware detection. In this paper, we propose a malware detection method based on Markov images and MobileNet, offering a cost-effective, efficient, and high-performing solution for malware detection. Additionally, this paper innovatively analyzes the robustness of opcode sequences.
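
    The Markov-image construction can be sketched as a 256 x 256 matrix of empirical transition frequencies between consecutive bytes, rendered as a grayscale image for MobileNet; the normalization details here are assumptions.

    ```python
    # Sketch: build a Markov image from a byte sequence (row-normalized
    # byte-pair transition frequencies, scaled to 8-bit grayscale).
    import os
    import numpy as np

    def markov_image(data: bytes) -> np.ndarray:
        counts = np.zeros((256, 256), dtype=np.float64)
        arr = np.frombuffer(data, dtype=np.uint8)
        np.add.at(counts, (arr[:-1], arr[1:]), 1.0)      # count byte-pair transitions
        row_sums = counts.sum(axis=1, keepdims=True)
        probs = np.divide(counts, row_sums, out=np.zeros_like(counts),
                          where=row_sums > 0)            # row-normalize to probabilities
        return (probs * 255).astype(np.uint8)            # grayscale image for the CNN

    img = markov_image(os.urandom(4096))                 # any binary works as a demo
    print(img.shape, img.dtype)                          # (256, 256) uint8
    ```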

  • Journal of Communications and Information Networks. 2022, 7(4): 408-420. https://doi.org/10.23919/JCIN.2022.10005218
    Abstract (48)  Full-text PDF (69)  HTML (19)

    The ability to effortlessly construct and broadcast false messages makes IEEE 802.11 wireless networks particularly vulnerable to attack. False frame generation allows rogue devices to impersonate an authorized user and issue commands that impact the user's network connection or possibly the entire network's security. Unfortunately, current device impersonation detection methods are unsuitable for small devices or real-time applications. Our contribution is to demonstrate that a rule-based learning classifier using several random forest (RF) features from an IEEE 802.11 frame can determine, in real time, the probability that an impersonating device has generated that frame. Our main innovation is a processing pipeline and algorithm that implement concurrent one-class classifiers on a per-device basis, yet are lightweight enough to run directly on a wireless access point (WAP) and produce real-time outputs.

  • Journal of Communications and Information Networks. 2022, 7(4): 433-446. https://doi.org/10.23919/JCIN.2022.10005220
    Abstract (89)  Full-text PDF (67)  HTML (58)

    For trapped users in disaster areas, the available energy of the affected user equipment (UE) is limited due to the breakdown of the ground power system. When complex geographical conditions prevent ground emergency vehicles from reaching disaster-stricken areas, an unmanned aerial vehicle (UAV) can effectively work as a temporary aerial base station serving terrestrial trapped users. Simultaneous wireless information and power transfer (SWIPT) systems are intriguing for distributed batteryless users (BUs), as they transfer data and energy simultaneously. However, how to achieve the maximum energy efficiency (EE) and energy transfer efficiency (ETE) for distributed BUs in UAV-enabled SWIPT systems is not very clear. In this paper, we develop three novel reconfigurable intelligent surface (RIS)-based SWIPT algorithms that solve this non-convex joint optimization problem using deep reinforcement learning (RL). Through the deployment of RIS-assisted UAVs, we aim to maximize the EE along with the ETE by jointly designing the UAV trajectory, the phase matrix, and the power splitting ratio within strict time and energy constraints. The numerical results show that our RL-based algorithms can effectively improve the time cost, the average charging rate, the data rate, and the EE/ETE performance of RIS-assisted SWIPT systems compared with benchmark solutions.

  • Journal of Communications and Information Networks. 2022, 7(2): 202-213. https://doi.org/10.23919/JCIN.2022.9815203
    Abstract (122)  Full-text PDF (66)  HTML (33)

    The direction of arrival (DOA) is approximated by a first-order Taylor expansion in most existing methods, which leads to limited estimation accuracy when using a coarse mesh, owing to the off-grid error. In this paper, a new root sparse Bayesian learning based DOA estimation method robust to gain-phase errors is proposed, which dynamically adjusts the grid angles under coarse grid spacing to compensate for the off-grid error and applies the expectation maximization (EM) method to solve the respective iterative formulas based on the prior distribution of each parameter. Simulation results verify that the proposed method reduces the computational complexity through coarse grid sampling while maintaining reasonable accuracy under gain and phase errors, as compared to existing methods.

  • Journal of Communications and Information Networks. 2022, 7(3): 309-323. https://doi.org/10.23919/JCIN.2022.9906944
    Abstract (77)  Full-text PDF (65)  HTML (19)

    The present paper proposes a secure design of energy-efficient multi-modular exponentiation techniques that use the store-and-reward and store-and-forward methods. Computation of the multi-modular exponentiation can be performed by three novel algorithms: store-and-reward, store-and-forward 1-bit (SFW1), and store-and-forward 2-bit (SFW2). Hardware realizations of the proposed algorithms are analyzed in terms of throughput and energy. The experimental results show that the proposed algorithms SFW1 and SFW2 increase the throughput by 3.98% and 4.82%, reduce the power by 5.32% and 6.15%, and save energy by 3.95% and 4.75%, respectively. The proposed techniques can prevent possible side-channel attacks and timing attacks as a consequence of an inbuilt confusion mechanism. Xilinx Vivado-21 on a Virtex-7 evaluation board and the integrated computer application for recognizing user services (ICARUS) Verilog simulation and synthesis tools are used for the field programmable gate array (FPGA) hardware realization. The hardware compatibility of the proposed algorithms has also been checked using Cadence for an application specific integrated circuit (ASIC).
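
    For reference, the baseline 1-bit-at-a-time square-and-multiply scan that SFW1/SFW2 refine with stored partial results looks as follows; this is only the textbook reference computation, not the proposed hardware algorithms.

    ```python
    # Baseline square-and-multiply modular exponentiation (textbook reference
    # computation; the paper's SFW1/SFW2 refine this with stored partials).
    def mod_exp(base: int, exp: int, mod: int) -> int:
        result = 1
        base %= mod
        while exp > 0:
            if exp & 1:                              # multiply when the current bit is 1
                result = (result * base) % mod
            base = (base * base) % mod               # square on every iteration
            exp >>= 1
        return result

    assert mod_exp(7, 560, 561) == pow(7, 560, 561)  # matches Python's built-in
    ```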

  • Journal of Communications and Information Networks. 2022, 7(4): 349-359. https://doi.org/10.23919/JCIN.2022.10005213
    Abstract (108)  Full-text PDF (63)  HTML (50)

    Network slicing has gained popularity as a result of the advances in the fifth generation (5G) mobile network. Network slicing facilitates the support of different service types with varying requirements, which brings to light the slicing-aware next generation mobile network architecture. While network slicing allows resource sharing among multiple stakeholders, it entails a long list of administrative negotiations among parties that have not established mutual trust. Distributed ledger technology may be a solution to these issues: its decentralized yet immutable and auditable ledger may help ease administrative negotiations and build mutual trust among multiple stakeholders. There has been much research interest in this direction, focusing on handling various problems in network slicing. This paper aims to organize this area of knowledge by first introducing network slicing from a standardization point of view, and then presenting the security, privacy, and trust challenges of network slicing in 5G and beyond networks. Furthermore, this paper covers the basics of distributed ledger technologies and the related approaches that tackle security, privacy, and trust threats in network slicing for 5G and beyond networks. The various proposals in the literature are compared and presented. Lastly, the limitations of current work and open challenges are illustrated as well.

  • Journal of Communications and Information Networks. 2022, 7(2): 135-144. https://doi.org/10.23919/JCIN.2022.9815197
    Abstract (141)  Full-text PDF (60)  HTML (25)

    In recent years, federated learning (FL) has played an important role in private data-sensitive scenarios, enabling learning tasks to be performed collectively without data exchange. However, due to the centralized model aggregation for heterogeneous devices in FL, the last-updated model after local training delays the convergence, which increases the economic cost and dampens clients' motivation for participating in FL. In addition, with the rapid development and application of intelligent reflecting surfaces (IRSs) in next-generation wireless communication, the IRS has proven to be an effective way to enhance communication quality. In this paper, we propose a framework of federated learning with IRS for grouped heterogeneous training (FLIGHT) to reduce the latency caused by the heterogeneous communication and computation capabilities of the clients. Specifically, we formulate a cost function and a greedy-based grouping strategy, which divides the clients into several groups to accelerate the convergence of the FL model. The simulation results verify the effectiveness of FLIGHT in accelerating the convergence of FL with heterogeneous clients. Besides the exemplified linear regression (LR) model and convolutional neural network (CNN), FLIGHT is also applicable to other learning models.

  • Journal of Communications and Information Networks. 2022, 7(2): 157-169. https://doi.org/10.23919/JCIN.2022.9815199
    Abstract (125)  Full-text PDF (59)  HTML (25)

    To support the needs of ever-growing cloud-based services, the number of servers and network devices in data centers is increasing exponentially, which in turn results in high complexity and difficulty in network optimization. Machine learning (ML) provides an effective way to deal with these challenges by enabling network intelligence. To this end, numerous creative ML-based approaches have been put forward in recent years. Nevertheless, the intelligent optimization of data center networks (DCNs) still faces enormous challenges. To the best of our knowledge, there is a lack of systematic and original investigations with in-depth analysis of intelligent DCNs. To this end, in this paper, we investigate the application of ML to DCN optimization and provide a general overview and in-depth analysis of the recent works, covering flow prediction, flow classification, and resource management. Moreover, we give unique insights into the technological evolution of the fusion of DCNs and ML, together with some challenges and future research opportunities.

  • Journal of Communications and Information Networks. 2022, 7(3): 324-332. https://doi.org/10.23919/JCIN.2022.9906945
    Abstract (67)  Full-text PDF (58)  HTML (22)

    For the problem of multiplexing multimodal vortex electromagnetic waves, a double-ring concentric uniform circular array (CUCA) consisting of 12 circularly polarized antennas (4 on the inner ring and 8 on the outer ring) is proposed in this paper. The need for a complex feeding network is avoided by rotating the circularly polarized antennas by a certain angle. The antennas are rotationally symmetric and point toward the center, generating orbital angular momentum (OAM) waves when fed with signals of the same amplitude and phase. In addition, this paper combines millimeter wave (mm-wave) and ultra-wideband (UWB) techniques with OAM. The proposed antenna array can generate OAM beams at 30-40 GHz with l = -1, -2. When l = -1, the relative bandwidth is 25.2% and the gain is 8.03 dBi; when l = -2, the relative bandwidth is 27.7% and the gain is 9.43 dBi. The analysis of the simulation results shows that the antenna array has UWB performance, good gain, and a standard spiral phase distribution, which is of practical significance for modal multiplexing of mm-wave band OAM.
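
    The excitation rule behind such OAM generation can be sketched as follows: element n of an N-element ring is driven with phase 2*pi*l*n/N, so the aperture accumulates a total phase of 2*pi*l around the ring. This is the standard uniform-circular-array rule, not a detail taken from the paper; N = 8 matches the outer ring described above.

    ```python
    # Standard UCA feeding rule for OAM mode l: phase 2*pi*l*n/N at element n.
    # Shown for the two modes reported above; not the paper's exact feed design.
    import numpy as np

    def oam_feed_phases(l: int, n_elements: int) -> np.ndarray:
        n = np.arange(n_elements)
        return np.angle(np.exp(1j * 2 * np.pi * l * n / n_elements))  # wrapped to (-pi, pi]

    for mode in (-1, -2):
        print(f"l = {mode}:", np.degrees(oam_feed_phases(mode, 8)).round(1))
    ```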

  • Journal of Communications and Information Networks. 2023, 8(1): 48-56. https://doi.org/10.23919/JCIN.2023.10087247
    Abstract (84)  Full-text PDF (54)  HTML (31)

    The reconfigurable intelligent surface (RIS), which is composed of multiple passive reflective components, is now considered an effective means to improve security performance in wireless communications, as it can enhance the signal of legitimate users and suppress the power leakage at eavesdroppers by adjusting signal phases. In this paper, we maximize the downlink ergodic secrecy sum rate of an RIS-aided multi-user system over Rician fading channels, where we assume that only imperfect channel state information (CSI) is available at the base station (BS). Firstly, we obtain a deterministic approximate expression for the ergodic secrecy sum rate by resorting to large-system approximation theory. The problem is then formulated to maximize the downlink ergodic secrecy sum rate by optimizing the regularization coefficient of regularized zero-forcing (RZF) precoding and the phase-shifting matrix of the RIS. Using the particle swarm optimization (PSO) method, we propose an alternating optimization (AO) algorithm to solve this non-convex problem. Finally, numerical simulations illustrate the accuracy of our large-system approximate expression as well as the effectiveness of the proposed algorithm.
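
    For concreteness, a minimal sketch of RZF precoding is given below; the regularization coefficient alpha is the scalar the paper tunes (jointly with the RIS phases), while the power normalization here is an assumption.

    ```python
    # Minimal RZF precoding sketch; alpha is the regularization coefficient
    # optimized in the paper, the Frobenius-norm power scaling is assumed.
    import numpy as np

    def rzf_precoder(H: np.ndarray, alpha: float, power: float = 1.0) -> np.ndarray:
        """H: K x M downlink channel (K users, M BS antennas)."""
        K, M = H.shape
        W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
        return W * np.sqrt(power / np.linalg.norm(W, "fro") ** 2)  # meet power budget

    rng = np.random.default_rng(4)
    H = rng.standard_normal((4, 16)) + 1j * rng.standard_normal((4, 16))
    W = rzf_precoder(H, alpha=0.1)
    print(np.abs(H @ W).round(2))                    # near-diagonal: low inter-user leakage
    ```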

  • Journal of Communications and Information Networks. 2023, 8(4): 319-328. https://doi.org/10.23919/JCIN.2023.10272358
    Abstract (40)  Full-text PDF (53)  HTML (4)

    Deep learning enables real-time resource allocation for ultra-reliable and low-latency communications (URLLC), one of the major use cases in next-generation cellular networks. Yet the high training complexity and weak generalization ability of neural networks impede the practical use of learning-based methods in dynamic wireless environments. To overcome these obstacles, we propose a parameter generation network (PGN) to efficiently learn bandwidth and power allocation policies in URLLC. The PGN consists of two types of fully-connected neural networks (FNNs). One is a policy network, which is used to learn a resource allocation policy or a Lagrange multiplier function. The other FNNs are hypernetworks, which are designed to learn the weight matrices and bias vectors of the policy network. Only the hypernetworks require training. Using the well-trained hypernetworks, the policy network is generated through forward propagation in the test phase. By introducing simple data processing, the hypernetworks can learn the weight matrices and bias vectors well from their indices, resulting in a low training cost. Simulation results demonstrate that the bandwidth and power allocation policies learned by the PGN perform very close to a numerical algorithm. Moreover, the PGN generalizes well across the number of users and wireless channels, and requires significantly lower memory costs, fewer training samples, and shorter training time than traditional learning-based methods.
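
    The hypernetwork idea can be sketched as follows: small trainable networks map a layer embedding to the weight matrix and bias vector of a policy layer, so only the hypernetworks hold trainable parameters. The layer sizes and embedding scheme are assumptions, not the paper's configuration.

    ```python
    # Hedged sketch of the hypernetwork idea: trainable generators emit the
    # weights/bias of a policy layer; the policy layer itself has no
    # directly-trained parameters. Sizes and embeddings are assumed.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HyperLayer(nn.Module):
        def __init__(self, in_dim: int, out_dim: int, emb_dim: int = 8):
            super().__init__()
            self.emb = nn.Parameter(torch.randn(emb_dim))      # layer-index embedding
            self.w_gen = nn.Linear(emb_dim, in_dim * out_dim)  # hypernetwork for weights
            self.b_gen = nn.Linear(emb_dim, out_dim)           # hypernetwork for bias
            self.in_dim, self.out_dim = in_dim, out_dim

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            w = self.w_gen(self.emb).view(self.out_dim, self.in_dim)
            b = self.b_gen(self.emb)
            return F.linear(x, w, b)                           # generated policy layer

    policy = nn.Sequential(HyperLayer(10, 32), nn.ReLU(), HyperLayer(32, 2))
    out = policy(torch.randn(5, 10))                           # e.g. bandwidth/power shares
    print(out.shape)                                           # torch.Size([5, 2])
    ```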

  • Journal of Communications and Information Networks. 2023, 8(4): 329-340. https://doi.org/10.23919/JCIN.2023.10272359
    Abstract (40)  Full-text PDF (53)  HTML (4)

    Low-earth orbit (LEO) satellite networks ignite global wireless connectivity. However, signal outages and co-channel interference limit the coverage of traditional LEO satellite networks, where a user is served by a single satellite. This paper explores the possibility of satellite cooperation in downlink transmissions. Using tools from stochastic geometry, we model and analyze the downlink coverage of a typical user with satellite cooperation under Nakagami fading channels. Moreover, we derive the joint distance distribution of the cooperative LEO satellites to the typical user. Our model incorporates fading channels, cooperation among several satellites, satellite density and altitude, and co-channel interference. Extensive Monte Carlo simulations are performed to validate the analytical results. Simulation and numerical results suggest that coverage with LEO satellite cooperation considerably exceeds coverage without cooperation. Moreover, there exist an optimal satellite density and an optimal satellite altitude that maximize the coverage probability, which gives valuable network design insights.
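
    A Monte Carlo sketch of the coverage analysis is given below: Nakagami-m power gains are Gamma(m, 1/m)-distributed, and a user is covered when the SINR from its cooperating satellites exceeds a threshold. The fixed geometry, units, and the power-summing (non-coherent) cooperation model are assumptions for illustration.

    ```python
    # Monte Carlo coverage sketch under Nakagami-m fading; fixed distances and
    # non-coherent power-sum cooperation are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(5)
    trials, m, alpha = 200_000, 3.0, 2.0             # Nakagami parameter, path-loss exponent
    d_serving = np.array([550.0, 600.0])             # two cooperating satellites (km)
    d_interf = np.array([800.0, 900.0, 1000.0])      # co-channel interferers (km)
    noise, theta = 1e-7, 1.0                         # noise power, SINR threshold

    def rx_power(d):
        h = rng.gamma(m, 1.0 / m, (trials, d.size))  # unit-mean Nakagami-m power gain
        return (h * d ** (-alpha)).sum(axis=1)

    sinr = rx_power(d_serving) / (rx_power(d_interf) + noise)
    print("coverage prob.:", np.mean(sinr > theta))  # rises with cooperation
    ```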

  • Journal of Communications and Information Networks. 2022, 7(4): 394-407. https://doi.org/10.23919/JCIN.2022.10005217
    Abstract (55)  Full-text PDF (52)  HTML (19)

    The large-scale deployment of intelligent Internet of things (IoT) devices has brought increasing needs for computation support in wireless access networks. Applying machine learning (ML) algorithms at the network edge, i.e., edge learning, requires efficient training in order to adapt to the varying environment. However, the transmission of the training data collected by devices requires huge wireless resources. To address this issue, we exploit the fact that data samples have different importance for training, and we use an influence function to represent this importance. Based on the importance metric, we propose a data pre-processing scheme combining data filtering, which reduces the size of the dataset, and data compression, which removes redundant information. As a result, the number of data samples as well as the size of every data sample to be transmitted can be substantially reduced while maintaining the training accuracy. Furthermore, we propose device scheduling policies, including rate-based and Monte-Carlo-based policies, for multi-device multi-channel systems, maximizing the total data importance of the scheduled devices. Experiments show that the proposed device scheduling policies bring more than a 2% improvement in training accuracy.

  • Journal of Communications and Information Networks. 2022, 7(2): 122-134. https://doi.org/10.23919/JCIN.2022.9815196
    Abstract (112)  Full-text PDF (52)  HTML (34)

    By leveraging data sample diversity, the early-exit network has recently emerged as a prominent neural network architecture for accelerating the deep learning inference process. However, the intermediate classifiers of the early exits introduce additional computation overhead, which is unfavorable for resource-constrained edge artificial intelligence (AI). In this paper, we propose an early exit prediction mechanism to reduce the on-device computation overhead in a device-edge co-inference system supported by early-exit networks. Specifically, we design a low-complexity module, namely the exit predictor, to guide some distinctly “hard” samples to bypass the computation of the early exits. Besides, considering the varying communication bandwidth, we extend the early exit prediction mechanism to latency-aware edge inference, which adapts the prediction thresholds of the exit predictor and the confidence thresholds of the early-exit network via a few simple regression models. Extensive experimental results demonstrate the effectiveness of the exit predictor in achieving a better tradeoff between accuracy and on-device computation overhead for early-exit networks. Moreover, compared with the baseline methods, the proposed method for latency-aware edge inference attains higher inference accuracy under different bandwidth conditions.

  • Yutong Zhang, Boya Di, Hongliang Zhang, Lingyang Song
    Journal of Communications and Information Networks. 2023, 8(2): 99-110. https://doi.org/10.23919/JCIN.2023.10173734
    Abstract (107)  Full-text PDF (50)  HTML (105)

    Recently, holographic multiple-input multiple-output (HMIMO) has motivated its potential use to support high-capacity data transmission with spatially quasi-continuous apertures. As a practical instance of HMIMO, reconfigurable refractive surfaces (RRSs) equipped with numerous metamaterial elements are utilized as antennas by refracting incident signals from signal sources. In this paper, we investigate a multi-user communication system with an RRS deployed as the base station (BS)'s transmit antenna. To mitigate the high overhead of accurate channel state information (CSI) acquisition, codebook design and beam training are employed to perform beamforming. Given the large scale of the RRS, users are likely to be randomly distributed in both the near and far fields around the BS, and their distribution is unknown in advance. By considering the radiation characteristics in both fields, a near-far field codebook is designed to be applicable to all users, regardless of their locations. To reduce the overhead, a multi-user beam training scheme is proposed to serve all users simultaneously by enhancing each codeword so that it can cover multiple areas. Considering a general case that includes users in both fields, simulation results indicate that, without prior knowledge of the user distribution, the proposed scheme outperforms state-of-the-art schemes in terms of sum rate and overhead.

  • Journal of Communications and Information Networks. 2023, 8(1): 80-89. https://doi.org/10.23919/JCIN.2023.10087250
    Abstract (84)  Full-text PDF (49)  HTML (94)

    The rejuvenation of non-geostationary orbit (NGSO) satellite communication holds the promise of seamless and ubiquitous broadband access from space. However, NGSO constellations must share the scarce radio spectrum resources with geostationary orbit (GSO) satellite systems, which results in dynamically changing and unevenly distributed interference to GSO systems. In this context, an ultra-large-scale NGSO constellation creates a more complicated interference environment for GSO systems, which raises urgent demands for inter-system interference evaluation. Accordingly, we investigate the inter-system downlink interference from an NGSO satellite mega-constellation to a GSO earth station. Specifically, we consider the scenario where the NGSO and GSO earth stations are co-located, and we apply a novel visibility analysis method in the interference modeling to reduce computational redundancy. The interference evaluation is then performed through comprehensive simulations, in which the Starlink constellation with more than 4 000 satellites is examined for the first time. The simulation results demonstrate various states of interference at the GSO earth station for different deployment locations. They reveal that the number of visible satellites influences the angle between the main lobe directions of the NGSO satellites and the GSO earth station antenna, which further affects the interference level.

  • Journal of Communications and Information Networks. 2022, 7(3): 259-268. https://doi.org/10.23919/JCIN.2022.9906940
    Abstract (98)  Full-text PDF (48)  HTML (31)

    In this paper, we propose a transmission scheme for uplink and downlink transmissions, where fifth generation (5G) low-density parity-check (LDPC) codes are implemented for error correction. In the proposed scheme, the acknowledgment (ACK) or negative acknowledgment (NACK) feedback information is transmitted along with the payload data by cyclically shifting the coded sequence, while the re-transmitted codewords are partially superimposed (XORed) on the current codewords. The distinguishing feature of the proposed transmission scheme is that it requires neither extra transmission bandwidth nor extra transmission power. We also propose truncating the error patterns in order to reduce both the implementation complexity and the error propagation. Numerical results show that the proposed scheme significantly outperforms conventional LDPC-coded transmission. For the 5G LDPC code with length 1 920 at a signal-to-noise ratio (SNR) of 1.3 dB, the word error rate (WER) of the data transmitted by the proposed scheme is about 10^-4, while that of the conventional LDPC-coded transmission is about 10^-2.