

    20 October 2024, Volume 40 Issue 10
    Research and Development
    Research on satellite-ground adaptive modulation and coding techniques based on intelligent prediction of channel state
    Shuo JI, Yaohua SUN, Mugen PENG
    2024, 40(10):  1-13.  doi:10.11959/j.issn.1000-0801.2024214

    In the scenario of direct mobile phone connection to low-earth orbit satellites, to address the issue that the channel quality indication feedback on which adaptive modulation and coding techniques rely is not available in real time, a channel state prediction model based on a deep echo state network was proposed. Furthermore, an intelligent selection mechanism for modulation and coding schemes was introduced based on the prediction model, wherein the transmitter selects the modulation and coding scheme suited to the current channel conditions according to the channel prediction results. Simulation results demonstrate that adaptive modulation and coding based on channel prediction can improve the bit error performance of the link to a certain extent.
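
    The abstract does not detail the deep echo state network; as a rough, hypothetical illustration of the general idea, the sketch below trains a minimal single-reservoir echo state network (the paper's deep ESN would stack several reservoirs) to predict the next value of a synthetic channel-quality series. All dimensions and data are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's deep ESN would stack several reservoirs.
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1: echo state property

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy channel-quality series standing in for delayed CQI measurements.
t = np.arange(1000)
cqi = np.sin(0.05 * t) + 0.1 * rng.standard_normal(1000)

X = run_reservoir(cqi[:-1])   # reservoir state at time t
y = cqi[1:]                   # target: channel value at t + 1
# Ridge-regression readout: the only trained part of an ESN.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)

pred = X @ W_out
print("one-step prediction RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

    The transmitter would then map each predicted channel value to the highest-rate MCS whose error target it satisfies.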

    An industrial Internet-of-things URLLC system via cross-layer design for anti-jamming
    Xiang WANG, Peiyi ZHAO, Qi ZENG, Jun ZHONG, Xing ZHANG, Jingru SU
    2024, 40(10):  14-26.  doi:10.11959/j.issn.1000-0801.2024219

    The classical OFDM multicarrier waveforms for transmission are adopted by the existing 5G/B5G URLLC standard (3GPP R17-18), and little attention has been paid to anti-interference strategies for URLLC-OFDM multicarrier transmission because it operates in licensed frequency bands. In the future, unlicensed frequency bands will host most heterogeneous multi-QoS services for the IIoT, resulting in complex wireless communication links, and the stringent high-reliability and low-latency requirements of IIoT information transmission cannot be fully met by the existing URLLC-OFDM waveforms. Firstly, the more robust Sub-FH technique was applied to subcarrier-configurable OFDMA (i.e., Sub-FH/OFDMA) to improve signal transmission reliability. Additionally, the Sub-FH/OFDMA waveforms were incorporated into a scheduling strategy with mini-slots as the basic unit. Hamming coding combined with mini-slot hopping HARQ was utilized by this scheduling strategy to effectively reduce the number of retransmissions, aiming to enhance real-time transmission among IIoT nodes. The theoretical relationship between BER/BLER and transmission delay was derived and the trade-off between them was analyzed. Simulation results demonstrate that the scheme can ensure stable transmission quality for IIoT nodes despite external electromagnetic interference and internal multi-user interference, achieving a millisecond-level transmission delay when the target BLER is 10^-5. The cross-layer design of waveform and MAC slot scheduling in this paper provides a feasible solution for the future practical application of B5G/6G communication in complex IIoT scenarios.
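
    As an illustration of the subcarrier-level frequency hopping idea (not the paper's actual pattern design), the following sketch generates pseudo-random per-user subcarrier sets for each mini-slot and shows how a fixed narrowband jammer then corrupts only some mini-slots, which mini-slot hopping HARQ can retransmit. All parameters are hypothetical.

```python
import numpy as np

# Hypothetical system parameters, not taken from the paper.
n_subcarriers, n_per_user = 64, 8
n_users, n_minislots = 4, 6

def hopping_pattern(user_seed):
    """One pseudo-random subcarrier set per mini-slot for one user; a real
    system would coordinate users with orthogonal patterns to limit
    intra-cell collisions."""
    r = np.random.default_rng(user_seed)
    return [r.choice(n_subcarriers, size=n_per_user, replace=False)
            for _ in range(n_minislots)]

patterns = {u: hopping_pattern(1000 + u) for u in range(n_users)}

# A narrowband jammer on a fixed block of subcarriers only corrupts the
# mini-slots whose hop lands inside the jammed block; those are the ones
# the mini-slot hopping HARQ would retransmit.
jammed = set(range(16, 24))
for u, pat in patterns.items():
    hit = [t for t, subs in enumerate(pat) if jammed & set(subs.tolist())]
    print(f"user {u}: jammed mini-slots {hit}")
```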

    Mixed signal modulation recognition method based on temporal deep residual shrinkage network
    Jinghua LIU, Xianglin WEI, Jianhua FAN, Yongyang HU, Xiaobo WANG, Bing YU
    2024, 40(10):  27-38.  doi:10.11959/j.issn.1000-0801.2024207

    Deep learning-based automatic signal modulation recognition has generally outperformed traditional methods in terms of classification accuracy and transferability, garnering widespread attention. However, most existing methods are designed to recognize single signal samples and are not applicable to scenarios involving overlapping signals. To address this limitation, a modulation recognition method for aliased signals was investigated and a temporal deep residual shrinkage network model was developed by integrating LSTM and DRSN. The model contained three key modules: a residual module, a shrinkage module, and an LSTM module. Salient information in overlapping signals was extracted by the residual and shrinkage modules, which also adaptively generated decision thresholds, while the LSTM module was tasked with extracting temporal features hidden within the aliased data. The combination of these modules significantly enhanced the recognition accuracy for aliased signals. Testing on both public and private datasets demonstrates that the proposed method outperforms five state-of-the-art approaches, achieving an average recognition and classification accuracy of 92.7% under high signal-to-noise ratio conditions. Notably, the recognition accuracy for 12 out of 21 types of aliased signals approaches 100%.
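
    A minimal PyTorch sketch of the kind of architecture the abstract describes is shown below: DRSN-style residual shrinkage blocks with learned soft thresholds, followed by an LSTM over the time axis. Layer sizes are illustrative and not taken from the paper; only the 21-class output matches the abstract.

```python
import torch
import torch.nn as nn

class ShrinkageBlock(nn.Module):
    """Residual block with learned soft thresholding, DRSN-style: channel-wise
    thresholds are produced by a small squeeze-and-excitation-like branch."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, 3, padding=1),
            nn.BatchNorm1d(channels))
        self.fc = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):
        h = self.conv(x)
        avg = h.abs().mean(dim=2)                   # (batch, channels)
        tau = (avg * self.fc(avg)).unsqueeze(2)     # adaptive per-channel threshold
        h = torch.sign(h) * torch.clamp(h.abs() - tau, min=0)  # soft threshold
        return torch.relu(h + x)                    # residual connection

class TemporalDRSN(nn.Module):
    """Shrinkage blocks, then an LSTM over time, then a classifier head."""
    def __init__(self, hidden=64, n_classes=21):
        super().__init__()
        self.stem = nn.Conv1d(2, 32, 3, padding=1)  # 2 channels: I/Q samples
        self.blocks = nn.Sequential(ShrinkageBlock(32), ShrinkageBlock(32))
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, iq):                          # iq: (batch, 2, time)
        h = self.blocks(self.stem(iq))
        out, _ = self.lstm(h.transpose(1, 2))       # (batch, time, 32) -> LSTM
        return self.head(out[:, -1])                # logits over signal mixes

logits = TemporalDRSN()(torch.randn(4, 2, 128))
print(logits.shape)  # torch.Size([4, 21])
```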

    Deep reinforcement learning-based resource joint optimization for millimeter-wave massive MIMO systems
    Qingli LIU, Xiaoyu LI, Rui LI
    2024, 40(10):  39-51.  doi:10.11959/j.issn.1000-0801.2024217

    Aiming at the problems of low throughput and energy efficiency caused by limited wireless resources, huge power consumption, and the mutual constraint between energy efficiency and system capacity in millimeter-wave massive multiple-input multiple-output systems, a resource joint optimization method based on deep reinforcement learning was proposed. The method adopted a three-stage strategy: firstly, an RF beamformer was constructed to reduce hardware cost and total power consumption through a small number of RF chains; secondly, a baseband precoder was designed using the effective channel state information; and finally, a two-tier deep reinforcement learning architecture was designed and applied to realize dynamic discrete bandwidth and continuous power resource allocation. Experimental results show that, compared with single-stage all-digital precoding, hybrid precoding with equal resource allocation, and a particle swarm optimization-based resource allocation algorithm, the proposed joint optimization method significantly improves the throughput and energy efficiency of the system.
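
    The sketch below illustrates, under invented sizes, one plausible shape for the two-tier policy described above: a discrete head choosing among bandwidth options and a continuous head emitting per-user powers conditioned on that choice. Training logic (exploration, critics, replay) is omitted, and nothing here is the paper's actual network.

```python
import torch
import torch.nn as nn

class TwoTierPolicy(nn.Module):
    """Illustrative two-tier policy: an upper tier picks a discrete bandwidth
    allocation, a lower tier outputs continuous per-user transmit powers
    conditioned on that choice. All sizes are hypothetical."""
    def __init__(self, state_dim=32, n_bw_options=8, n_users=4, p_max=1.0):
        super().__init__()
        self.p_max = p_max
        self.encoder = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.bw_head = nn.Linear(128, n_bw_options)       # DQN-like discrete tier
        self.power_head = nn.Sequential(                  # DDPG-like continuous tier
            nn.Linear(128 + n_bw_options, 128), nn.ReLU(),
            nn.Linear(128, n_users), nn.Sigmoid())

    def forward(self, state):
        h = self.encoder(state)
        bw_q = self.bw_head(h)                            # Q-values per bandwidth option
        bw = torch.argmax(bw_q, dim=-1)                   # greedy choice (no exploration)
        bw_onehot = nn.functional.one_hot(bw, bw_q.shape[-1]).float()
        power = self.p_max * self.power_head(torch.cat([h, bw_onehot], dim=-1))
        return bw, power

bw, power = TwoTierPolicy()(torch.randn(2, 32))
print(bw.shape, power.shape)   # per-sample bandwidth index and per-user powers
```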

    Patch-based domain adversarial training for speech enhancement
    Hongtao WANG, Zhihua LU, Qingwei YE, Lianjun ZHANG
    2024, 40(10):  52-60.  doi:10.11959/j.issn.1000-0801.2024225

    In deep learning-based speech enhancement methods, mismatched distributions between training data and test data are often encountered; these mismatches can involve differences in speakers, speech content, noise types, and signal-to-noise ratios between the datasets, and severe mismatches can significantly degrade enhancement performance. To address this issue, a speech enhancement method based on patch-level domain adversarial training was proposed. Building on previous domain adversarial training methods for speech enhancement, implicit modeling of the domain discriminator was employed, allowing the entire speech signal to be divided into multiple independent patches for discrimination. Adaptive learning of the training data was thereby enabled, reducing the distribution differences between training and test data and improving the model's enhancement capability on test data. Experimental results show that this method exhibits superior performance compared to previous methods under various degrees of data mismatch and maintains good stability as an adversarial training approach.
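
    A hedged sketch of the patch-level discrimination idea follows: a PatchGAN-style 1-D convolutional discriminator that emits one source/target domain decision per local patch of the enhancer's feature map, trained adversarially through a gradient reversal layer. The architecture is illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used in domain adversarial training."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class PatchDomainDiscriminator(nn.Module):
    """Instead of one domain decision per utterance, a strided 1-D conv stack
    emits one source/target logit per local patch of the enhancement
    network's feature map. Sizes are illustrative."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, 64, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(64, 64, kernel_size=5, stride=2, padding=2),
            nn.LeakyReLU(0.2),
            nn.Conv1d(64, 1, kernel_size=3, padding=1))  # one logit per patch

    def forward(self, feats, lam=1.0):       # feats: (batch, feat_dim, frames)
        return self.net(GradReverse.apply(feats, lam))

feats = torch.randn(8, 64, 100, requires_grad=True)
patch_logits = PatchDomainDiscriminator()(feats)
print(patch_logits.shape)   # (8, 1, 25): 25 patch-level domain decisions
```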

    Research on dynamic allocation of network slicing resources based on OS-MBRL
    Jiahui YAN, Weixuan ZHONG, Ligang DONG, Xian JIANG, Guangchang WANG, Lingrong LU
    2024, 40(10):  61-77.  doi:10.11959/j.issn.1000-0801.2024228

    With the growth of network users' business needs, how to achieve dynamic and accurate resource allocation for network slicing is a problem that must be solved in current networks. Considering that traditional model-free reinforcement learning methods require long model training times, a dynamic resource allocation method based on OS-MBRL was proposed. The online support vector machine algorithm was utilized to construct a system model that could handle dynamically changing data streams and be continuously updated to adapt to new data, ensuring fewer SLA violations while allocating fewer resources. Simulation results show that, compared with the NAF, DQN, and TD3 algorithms, the proposed method can reduce SLA violations by up to 80% and resource allocation by 9%.
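
    As a stand-in illustration of the online model-learning step (the paper's exact online SVM formulation is not reproduced here), the sketch below incrementally fits a linear SVR-style regressor with epsilon-insensitive loss as a one-step slice dynamics model. The dynamics and data are synthetic.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

# Stand-in for the paper's online SVM: a linear regressor with
# epsilon-insensitive loss (i.e. linear SVR) updated via partial_fit.
# It learns a one-step dynamics model s_{t+1} = f(s_t, a_t) that the
# RL agent can plan against instead of probing the real network.
rng = np.random.default_rng(0)
model = SGDRegressor(loss="epsilon_insensitive", epsilon=0.01, alpha=1e-4)

def true_next_load(state, action):
    # Hypothetical slice dynamics: next load reacts to allocated resources.
    return 0.9 * state - 0.5 * action + 0.05 * rng.standard_normal()

state = 1.0
for t in range(2000):
    action = rng.uniform(0, 1)                 # resources allocated this step
    next_state = true_next_load(state, action)
    model.partial_fit([[state, action]], [next_state])  # continuous update
    state = next_state

# The learned model lets the agent score candidate allocations offline,
# which is what shortens training relative to model-free RL.
probe = np.array([[0.8, a] for a in (0.2, 0.5, 0.8)])
print(dict(zip((0.2, 0.5, 0.8), model.predict(probe).round(3))))
```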

    Building the future: innovative application and development prospects of a telecommunication big data open platform for intelligent social governance
    Baohua QIU
    2024, 40(10):  78-85.  doi:10.11959/j.issn.1000-0801.2024218

    The importance of telecommunications big data for the governance of intelligent societies was delved into. Taking into account the current policy environment, the research objectives and contributions were elucidated. The theoretical foundations of telecommunications big data, its developmental trends, and its status both domestically and internationally were presented, and the necessity of and application directions for constructing a big data open platform were discussed. Solutions to key issues, including multi-domain data fusion, spatiotemporal data model construction, and big data openness strategies, were analyzed in depth, and the project outcomes and their impact at the technological, economic, and social levels were forecast. This study provides a valuable reference for the future application of telecom big data.

    Heterogeneous computing network resource management based on virtual network embedding
    Jinghang YU, Yichen ZHAO, Ling WANG, Xin CHEN, Haodong ZOU
    2024, 40(10):  86-99.  doi:10.11959/j.issn.1000-0801.2024230

    With the booming development of new applications and demands led by large AI models, computing scale and technology are experiencing unprecedentedly rapid evolution and diversified innovation. However, as computing networks show a trend toward clustering and heterogeneity, the contradiction between the rapid growth of computing demand and inefficient resource utilization has become increasingly prominent, and how to achieve unified and efficient management of heterogeneous computing to improve resource utilization has become an important research topic. Based on network virtualization (NV) technology, a heterogeneous cross-domain computing resource allocation method based on virtual network embedding (VNE) was proposed. Specifically, a policy network based on a deep reinforcement learning (DRL) model was constructed to accurately select candidate computing nodes and links for optimal resource allocation. A series of simulation experiments verifies the effectiveness of this method, which provides new ideas for solving the problem of heterogeneous computing management.
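
    The sketch below shows, with invented node features and sizes, the general shape of such a policy network: it scores each substrate node as a candidate host for the current virtual node and masks out infeasible ones; the sampled placement would be rewarded by an embedding revenue/cost signal during training.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class VNEPolicy(nn.Module):
    """Illustrative VNE policy network: scores each substrate node as a host
    for the current virtual node from simple per-node features (e.g. free
    CPU, free bandwidth, degree, demand). Features and sizes are hypothetical."""
    def __init__(self, n_features=4):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, node_feats, mask):
        # node_feats: (n_substrate, n_features); mask: 1 = feasible candidate
        logits = self.score(node_feats).squeeze(-1)
        logits = logits.masked_fill(mask == 0, float("-inf"))
        return torch.softmax(logits, dim=-1)   # embedding probabilities

feats = torch.rand(10, 4)                      # 10 substrate nodes
mask = (feats[:, 0] > 0.3).long()              # e.g. require enough free CPU
probs = VNEPolicy()(feats, mask)
host = torch.multinomial(probs, 1)             # sample a host for this virtual node
print(host.item())
```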

    An access control scheme for IoT based on smart contracts and CP-ABE
    Changxia SUN, Chuanhu ZHANG, Bingjie LIU, Yingjie YANG, Fernando BAÇÃO, Qian LIU
    2024, 40(10):  100-115.  doi:10.11959/j.issn.1000-0801.2024227

    As the number of Internet of things (IoT) devices increases, traditional centralized access control solutions are inadequate for the current large-scale IoT environment, while existing distributed access control schemes suffer from high monetary costs and low throughput in processing access requests. To address these issues, a scheme combining blockchain smart contracts with ciphertext-policy attribute-based encryption (CP-ABE) was proposed to implement access control for IoT resources. Using Hyperledger Fabric as the underlying network, attribute-based encryption was applied to functional tokens, and the token ciphertexts were stored in the interplanetary file system (IPFS). Through smart contracts, token retrieval addresses were publicly exposed to achieve 1-to-N authorization. Furthermore, contracts deployed on the blockchain were designed for decentralized permission evaluation of token requests, maintaining the allowed operations of subjects on specific resource objects and realizing finer-grained attribute-based access control. Simulation experiments and performance analysis demonstrate that, compared to existing solutions, this approach enables data owners to securely authorize access for a large number of requesting subjects in a shorter time frame, and stress tests show that the chaincode performs well.
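
    To make the 1-to-N authorization flow concrete, the toy walk-through below replaces real CP-ABE and Fabric chaincode with in-memory stand-ins: it shows only the data flow (encrypt a token under an attribute policy, store the ciphertext at a content address, expose the address via the ledger), not actual cryptography or the paper's code.

```python
import hashlib, json

ipfs = {}     # stand-in for IPFS: content-addressed storage
ledger = {}   # stand-in for the smart contract's world state

def cpabe_encrypt(token, policy):
    # Placeholder: a real CP-ABE scheme hides `token` so that only attribute
    # sets satisfying `policy` can decrypt. Here the pair is just bundled.
    return {"policy": policy, "payload": token}

def cpabe_decrypt(ct, attrs):
    return ct["payload"] if set(ct["policy"]) <= set(attrs) else None

def publish_token(resource, token, policy):
    """Data owner: encrypt token, store ciphertext off-chain, record address."""
    ct = json.dumps(cpabe_encrypt(token, policy))
    cid = hashlib.sha256(ct.encode()).hexdigest()[:16]  # IPFS-style content id
    ipfs[cid] = ct
    ledger[resource] = cid   # the contract exposes one address to N subjects

def request_access(resource, attrs):
    """Requesting subject: fetch ciphertext via the ledger, try to decrypt."""
    ct = json.loads(ipfs[ledger[resource]])
    return cpabe_decrypt(ct, attrs)

publish_token("sensor/42/read", token="cap-7f3a",
              policy=["dept:ops", "role:engineer"])
print(request_access("sensor/42/read", ["dept:ops", "role:engineer"]))  # cap-7f3a
print(request_access("sensor/42/read", ["role:guest"]))                 # None
```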

    Research on 5G lite approaches and implementation strategies
    Jianshu QIU, Ziyuan ZHU, Yu SHI
    2024, 40(10):  116-123.  doi:10.11959/j.issn.1000-0801.2024221

    5G lite is achieved by appropriately reducing performance metrics to trade for lower terminal cost and power consumption, thereby meeting the needs of application scenarios such as the IoT. In conjunction with the 3GPP's RedCap technology specifications, 5G light-weighting methods, their benefits, and their impacts on the network were presented. A comparative analysis of two implementation approaches was also conducted: terminal and network coordinated reduction, and terminal self-reduction. Based on this analysis, a converged 5G lite terminal implementation scheme was proposed, enabling terminal products to better adapt to the varied 5G lite deployment strategies of different operators.

    Research on new energy station network security assessment method based on improved LSTM network
    Shan LIU, Rui LI, Yao WANG
    2024, 40(10):  124-133.  doi:10.11959/j.issn.1000-0801.2024226

    To solve the problem that the existing network security protection systems of new energy stations cannot meet the network anomaly monitoring and alarm needs caused by the large-scale integration of new energy, a network security assessment method for new energy stations based on an improved long short-term memory network was proposed. Firstly, based on the architecture of the new energy station network system, the causes of network security incidents were analyzed. Secondly, based on the random forest algorithm, the Gini coefficient of new energy station network traffic was computed, and the importance coefficients of all network traffic features were then calculated to select the important features. Finally, the important features were input into the long short-term memory network, and attention mechanisms were used to adaptively weight time steps and features, strengthening the emphasis on important times and features in network traffic and thereby improving the accuracy of the model's network security assessment. The experimental results show that this method can accurately evaluate the network security status of new energy power stations. Compared with support vector machines, convolutional neural networks, and traditional long short-term memory networks, the evaluation accuracy is improved by 12.65%, 9.34%, and 8.79%, respectively, enhancing the perception, evaluation, and alarm capabilities for the network security status of new energy power systems.
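
    The two central steps, Gini-importance feature selection with a random forest followed by an attention-weighted LSTM, might look roughly like the sketch below. All sizes and the data itself are invented for the example.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for station traffic: 500 windows x 20 features + label.
X = rng.standard_normal((500, 20)).astype(np.float32)
y = (X[:, 3] + X[:, 7] - X[:, 12] > 0).astype(int)

# Step 1: Gini-importance ranking with a random forest; keep the top features.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:8]

class AttnLSTM(nn.Module):
    """LSTM whose hidden states are pooled by additive attention, so that
    informative time steps receive larger weights. Sizes are illustrative."""
    def __init__(self, n_feat, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, 2)

    def forward(self, seq):                      # seq: (batch, time, n_feat)
        h, _ = self.lstm(seq)
        w = torch.softmax(self.attn(h), dim=1)   # attention over time steps
        return self.out((w * h).sum(dim=1))

# Step 2: feed windows of the selected features to the attention LSTM.
seq = torch.from_numpy(X[:, top]).reshape(50, 10, len(top))  # 50 seqs of 10 steps
print(AttnLSTM(len(top))(seq).shape)             # (50, 2) security-state logits
```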

    Review
    Further discussion on strategic suggestions for promoting the construction of China's satellite Internet: considerations based on the technology route and commercial closed loop of satellite direct connection to mobile phone
    Lingcai YANG, Dong ZHAO, Huxiao YANG, Yankun LI, Ye TIAN
    2024, 40(10):  134-143.  doi:10.11959/j.issn.1000-0801.2024222

    Accelerating the construction of satellite Internet infrastructure and promoting the development of new technologies, new models, and new business forms of the digital economy, such as satellite direct connection to mobile phone, has become a key development direction of the global information and communication industry. The application of LEO (low earth orbit) constellation direct connection to mobile phone faces huge technical challenges. China's satellite Internet technology maturity and commercialization system are not yet mature, and there are mismatches between differentiated demand and global coverage, between industrial investment and commercial return, and between launch demand and carrying capacity. It was suggested to combine national characteristics, market characteristics, and technology maturity; give full play to China's industrial advantages in the field of terrestrial and satellite mobile communications; explore technological evolution routes and commercial development paths suitable for China's LEO constellation direct connection to mobile phone; and promote the connection of the innovation chain, technology chain, industrial chain, and capital chain to form an open, efficient, complete, and secure satellite Internet industry and business ecology. The aim is to construct a satellite Internet construction and operation mode suited to national conditions and to build a new engine for the development of the digital economy and new quality productive forces in China.

    Engineering and Application
    Research on intelligent computing power prediction based on AI large models
    Shuangjie LI, Chunxiang DU, Ziyu XIAO
    2024, 40(10):  144-151.  doi:10.11959/j.issn.1000-0801.2024215

    AI large models, represented by ChatGPT, have driven a rapid growth in intelligent computing power demand. Telecom operators and cloud service providers are increasingly expanding their intelligent computing center layouts. The future trends of large models and the scale of their computing power demand are of great importance for the layout and construction of intelligent computing centers. Starting from the consumption characteristics of computing power by AI large models, the key influencing factors of computing power consumption were analyzed. Based on multi-level models with different parameter scales and industry types, the intelligent computing power demand of AI large models was systematically analyzed, providing clear business inputs and support for the construction of intelligent computing centers.
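
    For a sense of the arithmetic involved in such demand analysis, the sketch below applies the widely cited "6 x parameters x tokens" approximation for training FLOPs and converts it to GPU-days under assumed hardware figures. All numbers are illustrative examples, not figures from the paper.

```python
# Back-of-the-envelope training-compute estimate using the common
# 6 * N_params * N_tokens FLOPs rule of thumb for dense transformers.
def training_flops(n_params, n_tokens):
    return 6 * n_params * n_tokens

def gpu_days(flops, gpu_flops=312e12, utilization=0.4):
    # Assumes ~312 TFLOPS (A100 FP16 tensor core peak) at 40% sustained
    # utilization; both are assumptions for illustration.
    return flops / (gpu_flops * utilization) / 86400

for name, p, t in [("7B model, 1T tokens", 7e9, 1e12),
                   ("70B model, 2T tokens", 70e9, 2e12)]:
    f = training_flops(p, t)
    print(f"{name}: {f:.2e} FLOPs, roughly {gpu_days(f):.0f} GPU-days")
```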

    Construction and application of a multi-objective rolling horizon optimization mathematical model in intelligent dispatching for telecom operators
    Fang LI, Xiaoliang MA, Ying LIU, Yuan LI, Sheng XIN
    2024, 40(10):  152-162.  doi:10.11959/j.issn.1000-0801.2024224

    With the rapid development of the telecommunications industry, intelligent dispatching has become a key means for operators to enhance service quality and efficiency. However, traditional dispatching methods struggle to simultaneously meet operators' multiple demands for service efficiency and quality. A mathematical model based on multi-objective optimization was proposed, with three objectives: total expected customer satisfaction score, work order response time, and agent workload balance. Additionally, an adaptive multi-objective optimization algorithm was designed, incorporating techniques such as adaptive weight adjustment, multi-objective optimization, and rolling horizon optimization to solve the model. Experimental results from a provincial telecom operator's agent system demonstrate that the model can improve both the service efficiency and the service quality of telecom operators.
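
    A toy sketch of the rolling-horizon, weighted-sum idea follows: in each window, newly arrived work orders are greedily assigned by a weighted combination of the three objectives named above. Weights, scores, and the arrival process are all hypothetical, and the paper's adaptive weight adjustment is reduced to fixed weights for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy rolling-horizon dispatcher over three windows and five agents.
n_agents = 5
weights = {"satisfaction": 0.5, "response": 0.3, "balance": 0.2}
workload = np.zeros(n_agents)

def assignment_score(agent, order):
    sat = order["skill_match"][agent]                  # expected satisfaction
    resp = -order["travel_min"][agent] / 60.0          # shorter response is better
    bal = -workload[agent] / (workload.mean() + 1e-9)  # penalize overloaded agents
    return (weights["satisfaction"] * sat +
            weights["response"] * resp +
            weights["balance"] * bal)

for window in range(3):                                # rolling horizon loop
    orders = [{"skill_match": rng.uniform(0.5, 1.0, n_agents),
               "travel_min": rng.uniform(5, 60, n_agents)} for _ in range(4)]
    for order in orders:                               # greedy weighted-sum choice
        best = max(range(n_agents), key=lambda a: assignment_score(a, order))
        workload[best] += 1
    print(f"window {window}: workload per agent = {workload}")
```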

    Customer churn prediction based on an ensemble forest meta-learning network
    Longge LI, Kengcheng ZHENG
    2024, 40(10):  163-172.  doi:10.11959/j.issn.1000-0801.2024159

    To address the difficulty of tree models in capturing temporal features in customer churn prediction tasks, a churn prediction method based on an ensemble forest meta-learning network (EFML) was proposed. Firstly, data quality was improved through grouping strategies, and class imbalance was addressed with undersampling techniques. Secondly, semantic vectors of users' temporal features were constructed using EFML's semantic graph constructor to depict fine-grained user behavior, forming a semantic graph that was explicitly integrated into the model. Finally, multiple base tree models were trained and combined with a multilayer perceptron (MLP) in a meta-learning fashion to generate comprehensive churn prediction results. Experimental results demonstrate that EFML can effectively exploit differences in customers' historical behaviors and capture and learn the complementary relationships between base tree models. Compared to random forest (RF), EFML shows a 2.7% increase in AUC, a 3.7% increase in AP, and a significant improvement in prediction accuracy. This framework, combining tree models and micro-level features, possesses excellent interpretability, providing a new perspective for operators to achieve more refined user-centric management.
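
    EFML's semantic graph construction is not reproduced here; as a minimal sketch of just the tree-models-plus-MLP stacking skeleton, the following uses scikit-learn's StackingClassifier on synthetic churn-like data.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Synthetic churn-like data: 2000 users x 15 behavioral features.
X = rng.standard_normal((2000, 15))
y = (X[:, 0] - 0.7 * X[:, 4] + 0.5 * rng.standard_normal(2000) > 0.8).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Several tree models as base learners; an MLP combines their class
# probabilities, mirroring the tree-models-plus-MLP structure described above.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("dt", DecisionTreeClassifier(max_depth=6, random_state=0))],
    final_estimator=MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                  random_state=0),
    stack_method="predict_proba")
stack.fit(X_tr, y_tr)
print("held-out accuracy:", round(stack.score(X_te, y_te), 3))
```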

    Research on antenna dip angle recognition method based on improved YOLOv5
    Ning ZHANG, Feng PAN, Lujing GENG, Zuhao CHEN, Tingting XU
    2024, 40(10):  173-181.  doi:10.11959/j.issn.1000-0801.2024223

    In order to achieve efficient and accurate measurement of the antenna dip angle and meet the large-scale, efficient measurement requirements of wireless optimization operation and maintenance scenarios, the YOLOv5 target detection framework was applied to the complex scenario of antenna dip angle measurement and improved to suit complex antenna detection and attitude recognition tasks, so as to accurately predict the dip angle. Experimental results show that the improved YOLOv5 model retains the detection capability of the original version, while its dip angle prediction error is reduced by 13%, with an absolute prediction error of 0.635°. The improved model maintains high detection accuracy while significantly improving the measurement accuracy of the antenna dip angle, providing a new technical path and reference basis for intelligent operation and maintenance in wireless optimization.
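
    Once the detector has localized the antenna, the dip angle itself is simple geometry. The sketch below derives it from two hypothetical keypoints; the keypoint detection is the (improved) YOLOv5 model's job and is not shown, and this post-processing step is an assumption for illustration rather than the paper's method.

```python
import math

def dip_angle_deg(top_xy, bottom_xy):
    """Mechanical dip angle from two detected keypoints on the antenna body:
    the angle between the antenna axis and the vertical, in degrees.
    Assumes image coordinates with the y axis pointing down."""
    dx = bottom_xy[0] - top_xy[0]
    dy = bottom_xy[1] - top_xy[1]
    return math.degrees(math.atan2(abs(dx), abs(dy)))

# Hypothetical keypoints in pixels: top and bottom of the antenna body.
print(round(dip_angle_deg((412, 130), (445, 520)), 2), "degrees")  # ~4.84
```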

Copyright Information
Authorized by: China Association for Science and Technology
Sponsored by: China Institute of Communications
Posts and Telecom Press Co., Ltd.
Publisher: Beijing Xintong Media Co., Ltd.
Editor-in-Chief: Chen Shanzhi
Editorial Director: Li Caishan
Address: F2, Beiyang Chenguang Building, Shunbatiao No.1 Courtyard, Fengtai District, Beijing, China
Postal Code: 100079
Tel: 010-53879277
        010-53879278
        010-53879279
E-mail: dxkx@ptpress.com.cn
Mailing Code: 2-397
ISSN 1000-0801
CN 11-2103/TN